
Isaac ROS Nvblox

Overview

Isaac ROS Nvblox contains ROS 2 packages for 3D reconstruction and costmap generation for navigation. nvblox_nav2 processes depth and pose inputs to reconstruct a 3D scene in real time and to produce a 2D costmap for Nav2. The costmap is used during planning as a vision-based means of avoiding obstacles.

nvblox_nav2 is designed to work with stereo cameras, which provide a depth image and a corresponding pose, and uses GPU acceleration through nvblox to compute the 3D reconstruction and 2D costmap.

In a typical graph using nvblox_nav2, the input color image corresponding to the depth image is processed with unet, using the PeopleSemSegNet DNN model, to estimate a segmentation mask for people in the color image. The pose corresponding to the depth image is computed using visual_slam. The resulting people mask and pose are combined with the color image and depth image to perform 3D scene reconstruction. The output costmap is provided to Nav2 through a costmap plugin, and an optional colorized 3D reconstruction can be displayed in RViz using the mesh visualization plugin.

nvblox_nav2 builds the reconstructed map in the form of a TSDF (Truncated Signed Distance Function) stored in a 3D voxel grid. This approach is similar to 3D occupancy grid mapping approaches in which occupancy probabilities are stored at each voxel. However, TSDF-based approaches like nvblox store the (signed) distance to the closest surface at each voxel. The surface of the environment can then be extracted as the zero-level set of this voxelized function. Typically, TSDF-based reconstructions provide higher quality surface reconstructions.
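
To make the distinction concrete, here is a minimal, illustrative Python sketch of TSDF fusion for a single voxel along a camera ray. It is a textbook-style weighted average, not nvblox's GPU implementation, and the truncation distance and weights are example values.

    # Minimal TSDF fusion sketch for one voxel (illustrative; not nvblox's GPU code).
    from dataclasses import dataclass

    TRUNCATION = 0.10  # truncation band around the surface, in meters (example value)

    @dataclass
    class TsdfVoxel:
        distance: float = 0.0  # truncated signed distance to the closest surface
        weight: float = 0.0    # accumulated observation weight

    def integrate(voxel: TsdfVoxel, measured_depth: float, voxel_depth: float,
                  obs_weight: float = 1.0) -> None:
        """Fuse one depth measurement into a voxel that lies on the camera ray.

        measured_depth: depth of the observed surface along the ray.
        voxel_depth:    depth of the voxel center along the same ray.
        """
        sdf = measured_depth - voxel_depth      # positive in front of the surface
        if sdf < -TRUNCATION:
            return                              # voxel is well behind the surface; skip it
        tsdf = min(sdf, TRUNCATION)             # truncate values far in front of the surface
        # Weighted running average of the signed distance.
        voxel.distance = (voxel.weight * voxel.distance + obs_weight * tsdf) / (
            voxel.weight + obs_weight)
        voxel.weight += obs_weight

    # The surface is the zero-level set of the stored distances, e.g. extracted
    # with marching cubes to produce the mesh shown in RViz.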

In addition to their use in reconstruction, distance fields are also useful for path planning because they provide an immediate means of checking whether potential future robot positions are in collision.
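
As a hedged example of that check (not the nvblox_nav2 API): if the distance field value at a candidate position is larger than the robot radius plus a safety margin, the position is collision-free.

    # Illustrative collision check against a distance field (not the nvblox_nav2 API).
    ROBOT_RADIUS = 0.3    # meters (example value)
    SAFETY_MARGIN = 0.05  # meters (example value)

    def is_collision_free(position, distance_at) -> bool:
        """Return True if a sphere of ROBOT_RADIUS centered at `position` is obstacle-free.

        distance_at: hypothetical callable returning the distance (in meters) from a
        point to the nearest obstacle surface, e.g. sampled from an ESDF.
        """
        return distance_at(position) > ROBOT_RADIUS + SAFETY_MARGIN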

People are common obstacles for mobile robots; while they should appear in the costmap, they should not become part of the 3D reconstruction. Planners that provide behavioral awareness by navigating differently depending on proximity to people benefit from a separate costmap for people. Person segmentation is computed on the color image, and the resulting mask is applied to the depth image to separate it into a scene depth image and a person depth image. The scene depth image is forwarded to TSDF mapping as explained above, while the person depth image is processed into an occupancy grid map.
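
The split can be pictured with a short numpy sketch. This is illustrative only, since Isaac ROS performs the masking in hardware-accelerated nodes, and the choice of 0 as the invalid-depth value is an assumption.

    # Illustrative mask-based split of a depth image (numpy sketch, not the actual nodes).
    import numpy as np

    def split_depth(depth: np.ndarray, person_mask: np.ndarray):
        """depth: HxW depth image in meters; person_mask: HxW boolean mask (True = person)."""
        scene_depth = depth.copy()
        person_depth = depth.copy()
        scene_depth[person_mask] = 0.0    # 0 used here as an invalid-depth placeholder
        person_depth[~person_mask] = 0.0
        return scene_depth, person_depth  # scene depth -> TSDF, person depth -> occupancy grid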

To relax the assumption that occupancy grid maps only capture static objects, Nvblox applies an occupancy decay step. At a fixed frequency, all voxel occupancy probabilities are decayed towards 0.5 over time. This means that the state of the map (occupied or free) becomes less certain after it has fallen out of the field of view, until it becomes unknown (0.5 occupancy probability).
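
A minimal sketch of such a decay step, assuming a simple exponential pull toward 0.5 (the decay factor and schedule are examples, not nvblox's exact parameters):

    # Illustrative occupancy decay toward 0.5 (example scheme, not nvblox's exact one).
    DECAY_FACTOR = 0.9  # fraction of the deviation from 0.5 kept per decay step

    def decay(occupancy_probability: float) -> float:
        """Pull one voxel's occupancy probability toward 0.5 (unknown)."""
        return 0.5 + DECAY_FACTOR * (occupancy_probability - 0.5)

    # Repeated decay steps drive both occupied (p near 1) and free (p near 0) voxels
    # toward 0.5, so regions that leave the field of view gradually become unknown.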

Table of Contents

  • Latest Update
  • Supported Platforms
  • Docker
  • Quickstart
  • Next Steps
  • Packages Overview
  • ROS 2 Parameters
  • ROS 2 Topics and Services
  • Troubleshooting
  • Updates

Latest Update

Update 2023-04-05: Human reconstruction and new weighting functions.

Supported Platforms

This package is designed and tested to be compatible with ROS 2 Humble running on Jetson or an x86_64 system with an NVIDIA GPU.

Note: Versions of ROS 2 earlier than Humble are not supported. This package depends on specific ROS 2 implementation features that were only introduced beginning with the Humble release.

Platform | Hardware                   | Software                  | Notes
Jetson   | Jetson Orin, Jetson Xavier | JetPack 5.1.1             | For best performance, ensure that power settings are configured appropriately.
x86_64   | NVIDIA GPU                 | Ubuntu 20.04+, CUDA 11.8+ |

Docker

To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following these steps. This will streamline your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.

Note: All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.

Quickstart

  1. Set up your development environment by following the instructions here.

  2. Clone this repository and its dependencies under ~/workspaces/isaac_ros-dev/src or /ssd/workspaces/isaac_ros-dev/src, depending on whether you are using the SD card or the SSD setup.

    cd ~/workspaces/isaac_ros-dev/src

    Note: For Jetson setup with SSD as optional storage:

    cd /ssd/workspaces/isaac_ros-dev/src

    Then clone the repositories:

    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
    git clone --recurse-submodules https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox && \
        cd isaac_ros_nvblox && git lfs pull
  3. Pull down a ROS Bag of sample data, using the path that matches your SD card or SSD setup:

    cd ~/workspaces/isaac_ros-dev/src/isaac_ros_nvblox && \
      git lfs pull -X "" -I "nvblox_ros/test/test_cases/rosbags/nvblox_pol"

    Note: For Jetson setup with SSD as optional storage:

    cd /ssd/workspaces/isaac_ros-dev/src/isaac_ros_nvblox && \
      git lfs pull -X "" -I "nvblox_ros/test/test_cases/rosbags/nvblox_pol"
  4. Launch the Docker container using the run_dev.sh script (the ISAAC_ROS_WS environment variable resolves to the correct path for the SD card or SSD setup, as mentioned here):

    Note: This step requires internet access to build and launch the Docker container.

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh ${ISAAC_ROS_WS}
  5. Inside the container, install package-specific dependencies via rosdep:

    cd /workspaces/isaac_ros-dev/ && \
        rosdep install -i -r --from-paths src --rosdistro humble -y --skip-keys "libopencv-dev libopencv-contrib-dev libopencv-imgproc-dev python-opencv python3-opencv nvblox"
  6. Build and source the workspace:

    cd /workspaces/isaac_ros-dev && \
      colcon build --symlink-install && \
      source install/setup.bash
  7. (Optional) Run tests to verify complete and correct installation:

    colcon test --executor sequential
  8. In the current terminal inside the Docker container, run the launch file for nvblox with Nav2:

    ros2 launch nvblox_examples_bringup isaac_sim_example.launch.py
  9. Open a second terminal and attach to the running Docker container (as in Step 4):

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh ${ISAAC_ROS_WS}
  10. In the second terminal inside the Docker container, play the ROS Bag:

    ros2 bag play src/isaac_ros_nvblox/nvblox_ros/test/test_cases/rosbags/nvblox_pol

You should see the robot reconstructing a mesh, with the 2D ESDF slice overlaid on top.
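
If RViz stays empty, a quick way to confirm that reconstruction output is being published is a small rclpy subscriber. The topic name below (/nvblox_node/map_slice) is an assumption about this example's defaults; check the actual name with ros2 topic list and adjust accordingly.

    # Hedged check that the 2D distance map slice is being published.
    # '/nvblox_node/map_slice' is an assumed topic name; verify with `ros2 topic list`.
    import rclpy
    from rclpy.node import Node
    from nvblox_msgs.msg import DistanceMapSlice

    class SliceChecker(Node):
        def __init__(self):
            super().__init__('slice_checker')
            self.create_subscription(
                DistanceMapSlice, '/nvblox_node/map_slice', self.callback, 10)

        def callback(self, msg):
            self.get_logger().info('Received a DistanceMapSlice message')

    def main():
        rclpy.init()
        rclpy.spin(SliceChecker())

    if __name__ == '__main__':
        main()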

Next Steps

Try More Examples

The launch files for all examples are available in the nvblox_examples_bringup package:

Launch file                        | Arguments                    | Description
isaac_sim_example.launch.py        | run_nav2, run_rviz           | Example to run with Isaac Sim (tutorial)
isaac_sim_humans_example.launch.py | run_nav2, run_rviz           | Example to run with Isaac Sim including human reconstruction (tutorial)
realsense_example.launch.py        | from_bag, bag_path, run_rviz | Example to run with a RealSense camera (tutorial)
realsense_humans_example.launch.py | from_bag, bag_path, run_rviz | Example to run with a RealSense camera including human reconstruction (tutorial)
record_realsense.launch.py         | launch_realsense, run_rqt    | Record RealSense data to replay with the above examples (tutorial)
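
These arguments can be set on the command line (for example, ros2 launch nvblox_examples_bringup realsense_example.launch.py run_rviz:=False) or from your own launch file. Below is a hedged sketch of the latter using the standard ROS 2 launch API; it assumes the argument names listed above and that the launch files are installed under launch/ in the package share directory, and the bag path is a placeholder.

    # Sketch: include an example launch file from your own launch file (assumptions noted above).
    import os
    from ament_index_python.packages import get_package_share_directory
    from launch import LaunchDescription
    from launch.actions import IncludeLaunchDescription
    from launch.launch_description_sources import PythonLaunchDescriptionSource

    def generate_launch_description():
        bringup_dir = get_package_share_directory('nvblox_examples_bringup')
        realsense_example = IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                os.path.join(bringup_dir, 'launch', 'realsense_example.launch.py')),
            launch_arguments={
                'from_bag': 'True',
                'bag_path': '/path/to/recorded/rosbag',  # placeholder path
                'run_rviz': 'False',
            }.items())
        return LaunchDescription([realsense_example])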

Customize your Dev Environment

To customize your development environment, reference this guide.

Packages Overview

  • isaac_ros_nvblox: A meta-package.
  • nvblox_examples_bringup: Launch files and configurations for launching the examples.
  • nvblox_image_padding: Node to pad and crop images to the fixed input resolution required by the image segmentation network.
  • nvblox_isaac_sim: Contains scripts for launching Isaac Sim configured for use with nvblox.
  • realsense_splitter: Node for using the RealSense camera with its built-in projector. See why this is needed here.
  • semantic_label_conversion: Package for converting semantic labels coming from Isaac Sim to mask images used by nvblox (readme).
  • nvblox_msgs: Custom messages for transmitting the output distance map slice and mesh over ROS 2.
  • nvblox_nav2: Contains a custom plugin that allows ROS 2 Nav2 to consume nvblox distance map outputs.
  • nvblox_performance_measurement: Multiple packages containing tools for measuring nvblox performance (readme).
  • nvblox_ros: The ROS 2 wrapper for the core reconstruction library and the nvblox node.
  • nvblox_ros_common: Package providing repository-wide utility functions.
  • nvblox_rviz_plugin: A plugin for displaying nvblox's (custom) mesh type in RViz.
  • [submodule] nvblox: The core (ROS independent) reconstruction library.

ROS 2 Parameters

Find all available ROS 2 parameters here.

ROS 2 Topics and Services

Find all ROS 2 subscribers, publishers and services here.

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, please check here.

realsense-ros packages don't build with ROS 2 Humble

Please follow the workaround here.

Troubleshooting the Nvblox Realsense Example

See our troubleshooting page here.

Troubleshooting ROS 2 communication issues

If it looks like you are dropping messages or you are not receiving any messages, please consult our troubleshooting page here.

Updates

Date       | Changes
2023-04-05 | Human reconstruction and new weighting functions.
2022-12-10 | Updated documentation.
2022-10-19 | Updated OSS licensing.
2022-08-31 | Update to be compatible with JetPack 5.0.2. Serialization of nvblox maps to file. Support for 3D LIDAR input and performance improvements.
2022-06-30 | Support for ROS 2 Humble and miscellaneous bug fixes.
2022-03-21 | Initial version.
