This package contains code for the paper: "SocRATES: Towards Automated Scenario-based Testing of Social Navigation Algorithms"
A ROS2-Gazebo scenario generator for evaluating social navigation algorithms using LLMs and HuNavSim.
- This package is tested on ROS2 Humble and Gazebo v11.10.2 with Python 3.9. Currently, we support querying any GPT-based model.
- Install the required Python packages from `requirements.txt`.
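  For example:
  ```
  pip install -r requirements.txt
  ```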
SocRATES consists of three submodules:
- A scenario generation module that queries LLMs in a structured manner to generate the parameters and files for simulating a scenario
- A modified version of HuNavSim
- A modified version of the HuNavSim wrapper for Gazebo
- For the two ROS2 packages, clone the submodules into your ROS2 workspace and build them with the colcon tool
You can generate a scenario in three steps:
1. (Optional) Annotate your map file with a scene graph
2. Generate the components of the scenario (robot and human trajectories, and human behaviors)
3. Simulate the scenario in Gazebo
- We have provided sample maps in the `locations` folder for reference. This step is optional: if you just want to simulate scenarios in the warehouse environment, skip it.
- Update `annotator/config.yaml` with the scene graph schema for your map and the paths to the image files. You can also tune the `zoom_in` parameter to comfortably annotate the image and the `zoom_out` parameter to save the annotated image at the corresponding zoom. The schema already filled in is for a warehouse map. Use self-explanatory edge and node names: for example, `simple-edge` or `simple-node` for unremarkable edges and nodes, and specific names (like `blind corner` or `intersection`) for important characterizing locations in the map. An illustrative `config.yaml` sketch is shown after these steps.
- Annotate your map image with the annotator tool:
  ```
  cd annotator
  python annotator.py
  ```
- Follow the instructions in the terminal to generate a map image annotated with a scene graph; an example is shown in the annotated warehouse map. Double-click to add nodes, and click and drag between two nodes to create an edge. Keep the terminal open: you will be asked there to enter the type of each node and edge, depending on your map schema.
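For reference, a hypothetical `annotator/config.yaml` might look like the following. The field names here are illustrative assumptions, not the package's actual schema; the file shipped with the repository, pre-filled for the warehouse map, is the ground truth.

```yaml
# Illustrative only -- field names are assumptions, not the actual schema.
map_image: ../locations/warehouse/map.png          # image to annotate
output_image: ../locations/warehouse/annotated.png # where to save the result
zoom_in: 2.0     # zoom level while annotating
zoom_out: 1.0    # zoom level for the saved annotated image
schema:
  node_types: [simple-node, blind-corner, intersection]
  edge_types: [simple-edge, narrow-corridor]
```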
- Configure the `inputs.yaml` file and specify the parameters for generating the scenario. Remember to specify the paths to the ROS2 packages according to your ROS2 workspace. The main inputs are (check out the examples in `sample_inputs.md`, and the illustrative sketch after this list):
  - `context`: the social context in which the robot is performing its task
  - `robot task`: the robot's task when navigating the scenario
  - `rough scenario`: a specific scenario you want generated; more specific and well-defined scenarios tend to work better (default: `None`)
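For illustration, an `inputs.yaml` along these lines might be used. The values are made up, and the exact key spellings and path fields are assumptions; refer to `sample_inputs.md` for the real format.

```yaml
# Illustrative only -- check sample_inputs.md for the actual key names.
context: "A busy warehouse floor during the morning shift"
robot_task: "Deliver a package from the loading dock to shelf B3"
rough_scenario: "The robot encounters a worker at a blind corner and must yield"
# Paths to the ROS2 packages in your workspace (assumed field names):
hunav_sim_path: ~/ros2_ws/src/hunav_sim
hunav_gazebo_wrapper_path: ~/ros2_ws/src/hunav_gazebo_wrapper
```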
- If you want to reuse parts of a previously generated scenario, such as the scenario description, trajectories, or behaviors, set the corresponding `load_<component_to_load>: true` variable. For example, to regenerate only the human behaviors while keeping the previously generated scenario and trajectories, use:
  ```
  load_scenario_response: true
  load_trajectory_response: true
  load_bt_response: false
  ```
- Run the scenario generator with `python main.py`.
- You will be asked to confirm the generated scenarios and trajectories.
- Scenario generation can fail (LLMs are imperfect). The script retries generation multiple times; if it fails completely, please rerun it.
- If the trajectories or behaviors fail to generate, the scenario may be regenerated. You can refer to the scene graph image to inspect the proposed paths for the human and the robot.
- In a separate terminal, source the environment with ROS2 installed. If you've set the paths correctly, the scenario generation script will have modified the files in the `hunav_gazebo_wrapper` package directly to orchestrate the scenario.
- Build your ROS2 workspace with:
  ```
  cd ros2_ws # change to your corresponding workspace
  colcon build --packages-select hunav_gazebo_wrapper hunav_sim
  source install/setup.bash
  ```
- Launch the scenario with:
  ```
  ros2 launch hunav_gazebo_wrapper scenario.launch.py
  ```
- LLMs can go wrong in many ways despite our error-handling measures; if this command fails with a (non-ROS-related) error, please regenerate the scenario.
- The scenario starts in a paused state, giving you time to open any other analysis tools or nodes. Remember to unpause the Gazebo simulation with the space bar.
- Control the robot:
  - To teleop, run the teleop node in another terminal, e.g.:
    ```
    ros2 run hunav_gazebo_wrapper teleop_keyboard_twist.py
    ```
  - To use Nav2 planners:
    ```
    ros2 launch hunav_gazebo_wrapper nav2_follow_waypoints.launch.py params_file:=<nav2_params_file>
    ```
- Also note that we throttle the movements of the humans to increase the likelihood that the generated scenario actually occurs in simulation (for example, we make a human wait at a previous waypoint if the robot has not yet reached the point where the human and the robot must meet for the scenario). The sketch below illustrates the idea.
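A minimal, purely illustrative sketch of this throttling logic. All names here (`next_human_waypoint`, `trigger_radius`, etc.) are ours for exposition, not the package's API:

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def next_human_waypoint(human, robot_pos, meeting_point, trigger_radius=1.5):
    """Hold the human at their current waypoint until the robot is close
    enough to the meeting point for the planned encounter to happen."""
    robot_far = distance(robot_pos, meeting_point) > trigger_radius
    if human["next_waypoint"] == meeting_point and robot_far:
        return human["current_waypoint"]  # wait: re-issue the current waypoint
    return human["next_waypoint"]         # proceed along the planned trajectory
```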
- The humans and the robot in our framework can communicate through gestures. Run the following script to observe when the robot or a human makes a gesture:
  ```
  ros2 run hunav_gazebo_wrapper gesture_listener.py
  ```
  Publish to the `/robot_gesture` topic to make gestures towards humans, e.g. with `ros2 run rqt_publisher rqt_publisher`. The integer robot gestures have the following semantic meanings: 0 (no gesture), 1 ("WAIT"), 2 ("PROCEED"), 3 ("ACKNOWLEDGED"), 4 ("EXCUSE ME").
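As an alternative to `rqt_publisher`, here is a minimal Python sketch that publishes a gesture code. It assumes the topic carries a `std_msgs/Int32`; check the actual message type with `ros2 topic info /robot_gesture` before relying on this.

```python
# Minimal sketch: publish a gesture code to /robot_gesture.
# Assumes the topic type is std_msgs/Int32 -- verify before use.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32

class GesturePublisher(Node):
    def __init__(self):
        super().__init__('gesture_publisher')
        self.pub = self.create_publisher(Int32, '/robot_gesture', 10)

    def send(self, code: int):
        msg = Int32()
        msg.data = code  # 0: none, 1: WAIT, 2: PROCEED, 3: ACKNOWLEDGED, 4: EXCUSE ME
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = GesturePublisher()
    node.send(1)  # ask the human to WAIT
    rclpy.spin_once(node, timeout_sec=0.5)  # give the message time to go out
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```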
- You can fine-tune the scenario parameters by modifying the saved files in the `responses` directory, setting the corresponding `load` parameters (like `load_scenario_response`) to `true`, and then running `python main.py`. This will not regenerate the scenario, only update the Gazebo simulation.
To cite our work, please use:
```bibtex
@misc{marpally2024socratesautomatedscenariobasedtesting,
      title={SocRATES: Towards Automated Scenario-based Testing of Social Navigation Algorithms},
      author={Shashank Rao Marpally and Pranav Goyal and Harold Soh},
      year={2024},
      eprint={2412.19595},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2412.19595},
}
```