RealSense Camera Examples

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/realsense_example.gif

This page contains tutorials for running Isaac ROS Nvblox together with Isaac ROS Visual SLAM on an Intel RealSense camera.

Note

This tutorial requires a compatible RealSense camera from the list of available cameras.

Install

  1. Complete the Isaac ROS Nvblox RealSense setup tutorial.

  2. Complete the nvblox quickstart.

  3. If you installed nvblox as a Debian package, you will also need to clone isaac_ros_nvblox under ${ISAAC_ROS_WS}/src:

    cd ${ISAAC_ROS_WS}/src
    git clone --recursive -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox.git isaac_ros_nvblox
    
  4. Stop Git tracking the COLCON_IGNORE file in the realsense_splitter package and remove it.

    cd ${ISAAC_ROS_WS}/src/isaac_ros_nvblox/nvblox_examples/realsense_splitter && \
        git update-index --assume-unchanged COLCON_IGNORE && \
        rm COLCON_IGNORE
    

    Note

    The COLCON_IGNORE file was added to remove the dependency on realsense-ros for users who don’t want to run the RealSense examples.

  5. Launch the Docker container using the run_dev.sh script (if not already launched):

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  6. Build the realsense_splitter package:

    cd /workspaces/isaac_ros-dev
    colcon build --symlink-install --packages-up-to realsense_splitter
    source install/setup.bash
    

RealSense Example

This example runs nvblox-based reconstruction from a single RealSense camera, using either live data streamed directly from the camera or recorded data played back from a ROSbag.

  1. Start the Isaac ROS Dev Docker container (if not started in the install step):

    cd $ISAAC_ROS_WS && ./src/isaac_ros_common/scripts/run_dev.sh
    
  2. Navigate (inside the Docker container) to the workspace folder and source the workspace:

    cd /workspaces/isaac_ros-dev
    source install/setup.bash
    
  3. Run the RealSense example, either live from a sensor or from a recorded ROSbag:

    ros2 launch nvblox_examples_bringup realsense_example.launch.py
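
    The command above runs live from a connected sensor. To replay a recorded ROSbag instead, pass the bag path as a launch argument. A minimal sketch follows; the rosbag argument name is an assumption, so confirm the arguments your version exposes with ros2 launch nvblox_examples_bringup realsense_example.launch.py --show-args.

    # Hypothetical playback invocation; <path/to/rosbag> is a placeholder.
    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
        rosbag:=<path/to/rosbag>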
    

Note

If you want to restrict odometry to a 2D plane (for example, to run a robot in a flat environment), you can use the enable_ground_constraint_in_odometry argument.
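
A minimal sketch, assuming the argument takes a boolean value:

    # Keep visual odometry constrained to a 2D plane (flat environments).
    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
        enable_ground_constraint_in_odometry:=True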

Note

Depending on how the RealSense camera is mounted on the platform, you may need to tune the ESDF slice height specified in the nvblox config file. Details about nvblox mapping parameters can be found in the mapper parameters documentation.
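
As a sketch, you can inspect and adjust the slice height at runtime with ros2 param. The node and parameter names below (/nvblox_node, static_mapper.esdf_slice_height) are assumptions that may differ between releases; verify them with ros2 param list.

    # Hypothetical node and parameter names; check `ros2 param list` first.
    ros2 param get /nvblox_node static_mapper.esdf_slice_height
    ros2 param set /nvblox_node static_mapper.esdf_slice_height 0.3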

Recording Data with RealSense

To record RealSense data for nvblox:

  1. Connect the camera, start the Docker container, and source the workspace as explained in the RealSense Example above.

  2. Start recording:

    ros2 launch nvblox_examples_bringup record_realsense.launch.py
    
  3. Stop the recording with Ctrl+C when done.

  4. The resulting ROSbag can be run using the instructions above.

Reconstruction With People Segmentation

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/realsense_nvblox_humans.gif

This tutorial demonstrates how to perform dynamic people reconstruction in nvblox, using people segmentation models on RealSense data. For more information on how people reconstruction with segmentation models works, see Technical Details.

Note

If you are on a desktop machine, we recommend using the PeopleSemSegNet_Vanilla model. On Jetson platforms, we recommend the lighter PeopleSemSegNet_ShuffleSeg model provided in Isaac ROS Image Segmentation for better performance.

  1. Download and install the PeopleSemSegNet model assets:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-peoplesemseg-models-install &&
    ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula &&
    ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
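
    To confirm that the assets were installed, list the asset directory. The path below is an assumption based on the default Isaac ROS asset location:

    # Expected model location (may vary by release).
    ls ${ISAAC_ROS_WS}/isaac_ros_assets/models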
    
  2. Run the example, either live from a RealSense camera or from a ROSbag. Both the full and the light segmentation model (PeopleSemSegNet_Vanilla and PeopleSemSegNet_ShuffleSeg) are supported; the command below uses the default model, and a sketch for selecting the lighter model follows.

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    mode:=people_segmentation
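
    The command above launches with the default model. As a sketch for selecting the lighter model, we assume the example exposes a model-selection launch argument; the argument name and value below are assumptions, so confirm them with --show-args:

    # Hypothetical model-selection argument; verify with --show-args.
    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
        mode:=people_segmentation \
        people_segmentation:=peoplesemsegnet_shuffleseg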
    

Reconstruction With People Detection

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/1realsense_detection_galileo.gif

This tutorial demonstrates how to perform dynamic people reconstruction in nvblox, using a people detection model on RealSense data. For more information on how people reconstruction with a detection model works, see Technical Details.

  1. Download and install the PeopleNet model assets:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-peoplenet-models-install
    ros2 run isaac_ros_peoplenet_models_install install_peoplenet_amr_rs.sh --eula
    
  2. Run the example with the people detection model (PeopleNet), either live from a RealSense camera or from a ROSbag. Running live assumes that the camera has been started successfully per the instructions above.

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    mode:=people_detection
    

Reconstruction With Dynamic Scene Elements

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/realsense_dynamic_example.gif

This tutorial demonstrates how to build a reconstruction with dynamic elements in the scene (people and non-people) using RealSense data. For more information about how dynamic reconstruction works in nvblox, see Technical Details.

  1. Run the example, either live from a RealSense camera or from a ROSbag:

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    mode:=dynamic
    

Visualizing in Foxglove

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/realsense_in_foxglove.gif

The examples in the previous sections of this page use RViz for visualization. RViz is our default visualization tool when nvblox runs on the same computer that displays the visualization. If you would like to visualize a reconstruction streamed from a remote machine, for example a robot, our recommended method is to use Foxglove.

To visualize with Foxglove, see Foxglove Visualization. Ensure that you additionally install the nvblox Foxglove extension. The animation above shows the result of visualizing the /nvblox_node/mesh and /nvblox_node/static_esdf_pointcloud topics.
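
Foxglove connects to the robot over a WebSocket bridge. As a sketch, assuming the ros-humble-foxglove-bridge package is installed on the robot, you can start the bridge with:

    # Start the Foxglove WebSocket bridge (default port 8765).
    ros2 launch foxglove_bridge foxglove_bridge_launch.xml port:=8765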

Each of the examples above exposes launch arguments to enable visualization in Foxglove. For example, to run the RealSense Example above on a live sensor, using Foxglove instead of RViz, run:

ros2 launch nvblox_examples_bringup realsense_example.launch.py \
run_foxglove:=True run_rviz:=False

Note

When visualizing from a remote machine over WiFi, bandwidth is limited and easily exceeded, which can lead to poor visualization results. For best results, we recommend visualizing a limited number of topics and avoiding high-bandwidth topics such as images. Furthermore, it is necessary to limit the bandwidth of the mesh transmitted by nvblox. Nvblox exposes a parameter for this purpose: layer_streamer_bandwidth_limit_mbps. When visualizing over WiFi, we recommend setting this parameter to 30.
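
For example, a minimal sketch that applies the recommended limit at runtime. This assumes the node is named /nvblox_node (matching the topic names above) and that the parameter is a floating-point value:

    # Cap the mesh streaming bandwidth at roughly 30 Mbps.
    ros2 param set /nvblox_node layer_streamer_bandwidth_limit_mbps 30.0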

Nvblox with Multiple RealSense

See our Multi-RealSense Tutorial.

Troubleshooting

See RealSense Issues.