Multi-RealSense Camera Examples#

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.0/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/multi_realsense_galileo.gif

This page contains tutorials for running nvblox with multiple RealSense cameras.

Install#

  1. Complete the installation instructions from the single RealSense tutorial.

Multi-RealSense Camera Setup#

  1. Activate the Isaac ROS environment (if not started in the install step)

    isaac-ros activate
    
  2. Determine each camera's serial number. For each RealSense camera, plug in only that camera (unplug all other RealSense cameras) and run:

    rs-enumerate-devices
    

    At the top of its output, the tool lists the device name, serial number, and firmware version. Write down the serial number for this camera (a tip for listing all serial numbers at once follows this list). Example:

    Device info:
        Name                          :     Intel RealSense D455
        Serial Number                 :     151422250659
        Firmware Version              :     5.16.0.1
    

  3. Add a calibration URDF file to specify the transformation between base_link and each RealSense camera. An example calibration file is stored at a default location, and nvblox uses it without requiring any additional arguments. Alternatively, you can create your own camera calibration URDF file and pass it with the multicam_urdf_path argument (a sketch for verifying the resulting transforms follows this list):

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    multicam_urdf_path:=<"urdf_nominals_file_path">
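
If several cameras are already connected, you can list all of their serial numbers in one pass instead of plugging them in one at a time. A minimal sketch using the short-summary flag of rs-enumerate-devices:

    # Print one summary line (name, serial number, firmware) per device.
    rs-enumerate-devices -s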

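Once nvblox is launched with a calibration URDF, you can spot-check the transform between base_link and each camera frame. A minimal sketch using tf2_echo, where camera0_link is a placeholder frame name (substitute the frames defined in your URDF):

    # Print the transform from base_link to the first camera's frame.
    # camera0_link is a placeholder; use the frame names from your URDF.
    ros2 run tf2_ros tf2_echo base_link camera0_link
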
RealSense Example#

This example runs nvblox-based reconstruction from multiple RealSense cameras, either from live data coming directly off the cameras or from recorded data replayed from a ROSbag.

  1. Activate the Isaac ROS environment (if not started in the install step)

    isaac-ros activate
    
  2. Navigate (inside the Isaac ROS environment) to the workspace folder, and source the workspace

    cd /workspaces/isaac_ros-dev
    source install/setup.bash
    
  3. Run the RealSense example, either live from cameras or from a recorded ROSbag.

    1. Start the cameras. Provide the serial numbers identified above as a comma-separated list (camera_serial_numbers) and the number of cameras as an integer (num_cameras), for example:

    ros2 launch nvblox_examples_bringup realsense.launch.py \
    run_standalone:=True \
    camera_serial_numbers:='211523062311,151223061441,151422251043,215122256933' \
    container_name:='nvblox_container' \
    num_cameras:=4
    
    2. Switch to another terminal and start the Isaac ROS environment. For each camera (identified by its index INDEX), ensure that it is publishing topics at the expected frequency: 15 Hz for color, 60 Hz for the first camera's depth (the first entry of the camera_serial_numbers list from the previous step), and 30 Hz for the other cameras' depth. Otherwise, restart the cameras from the previous step until they meet the expected rates (a loop for checking all cameras at once is sketched after this list).

    ros2 topic hz /camera${INDEX}/color/image_raw
    ros2 topic hz /camera${INDEX}/depth/image_rect_raw
    
    3. Switch to another terminal and start the Isaac ROS environment. Provide the number of cameras as an integer (num_cameras) and launch nvblox as follows:

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    num_cameras:=4 \
    mode:=static \
    attach_to_container:=True \
    container_name:='nvblox_container' \
    run_realsense:=False
    
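As referenced in the rate-check step above, you can script the per-camera checks instead of running them one by one. A minimal sketch, assuming camera indices start at 0 and num_cameras is 4; timeout ends each measurement after ten seconds:

    # Sample each camera's color and depth rates for ~10 seconds apiece.
    for INDEX in 0 1 2 3; do
      echo "camera${INDEX} color:"
      timeout 10 ros2 topic hz /camera${INDEX}/color/image_raw
      echo "camera${INDEX} depth:"
      timeout 10 ros2 topic hz /camera${INDEX}/depth/image_rect_raw
    done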

Note

If you want to restrict odometry to a 2D plane (for example, to run a robot in a flat environment), you can use the enable_ground_constraint_in_odometry argument.
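
For example, a minimal sketch that adds this argument to the static multi-camera launch from step 3 above:

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    num_cameras:=4 \
    mode:=static \
    enable_ground_constraint_in_odometry:=True \
    attach_to_container:=True \
    container_name:='nvblox_container' \
    run_realsense:=False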

Note

Depending on how the RealSense cameras are mounted on the platform and calibrated, you are expected to tune the ESDF slice height specified in the nvblox configuration file. Details about nvblox mapping parameters can be found in the mapper parameters documentation.
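
As a starting point, you can inspect and adjust a slice-height parameter at runtime before committing a change to the config file. A hypothetical sketch: the node name nvblox_node and the parameter name esdf_slice_height are illustrative assumptions, so check the mapper parameters documentation for the exact keys in your release:

    # Node and parameter names below are illustrative assumptions;
    # verify them against the mapper parameters documentation.
    ros2 param get /nvblox_node esdf_slice_height
    ros2 param set /nvblox_node esdf_slice_height 0.3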

Warning

The RealSense camera emitter state might be invalid, causing the splitter to send the wrong (opposite) images to cuVSLAM and nvblox. This may lead to poor tracking and reconstruction. See the RealSense troubleshooting section for more details.

Recording Data with RealSense#

To record RealSense data for nvblox:

  1. Connect the cameras, activate the Isaac ROS environment, and source the workspace as explained in RealSense Camera Examples.

  2. Start the cameras. Provide the serial numbers identified above as a comma-separated list (camera_serial_numbers) and the number of cameras as an integer (num_cameras), for example:

    ros2 launch nvblox_examples_bringup realsense.launch.py \
    run_standalone:=True \
    camera_serial_numbers:='211523062311,151223061441,151422251043,215122256933' \
    container_name:='nvblox_container' \
    num_cameras:=4
    
  3. Switch to another terminal and start the Isaac ROS environment. For each camera (identified by its index INDEX), ensure that it is publishing topics at the expected frequency: 15 Hz for color, 60 Hz for the first camera's depth (the first entry of the camera_serial_numbers list from the previous step), and 30 Hz for the other cameras' depth. Otherwise, restart the cameras from the previous step until they meet the expected rates (see the rate-check loop sketched in the RealSense Example section above).

    ros2 topic hz /camera${INDEX}/color/image_raw
    ros2 topic hz /camera${INDEX}/depth/image_rect_raw
    
  4. Switch to another terminal and start the Isaac ROS environment. Provide the serial numbers identified above as a comma-separated list (camera_serial_numbers) and the number of cameras as an integer (num_cameras). Start recording as follows:

    ros2 launch nvblox_examples_bringup record_realsense.launch.py \
    num_cameras:=4 \
    camera_serial_numbers:='211523062311,151223061441,151422251043,215122256933' \
    run_rqt:=False \
    run_realsense:=False
    
  5. Stop the recording and cameras when done.

  6. The resulting ROSbag can be run using the instructions above (a playback sketch follows this list).
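
A sketch for verifying and replaying the recording is below. ros2 bag info is a standard ROS 2 command; the rosbag launch argument is taken from the single RealSense tutorial and is assumed here to carry over unchanged to the multi-camera example:

    # Inspect the recorded bag (topics, message counts, duration).
    ros2 bag info <path_to_rosbag>

    # Replay it through the multi-camera example. The rosbag argument
    # follows the single RealSense tutorial; assumed to apply here too.
    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    num_cameras:=4 \
    rosbag:=<path_to_rosbag>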

Reconstruction With People Segmentation#

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.0/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/4realsense_segmentation_galileo.gif

This tutorial demonstrates how to perform dynamic people reconstruction using people segmentation models for systems with multiple RealSense cameras. For more information, see the single RealSense tutorial.

Note

If you are on a desktop machine, we recommend using the PeopleSemSegNet_Vanilla model. On Jetson platforms, we recommend the lighter PeopleSemSegNet_ShuffleSeg model that is provided in Isaac ROS Image Segmentation, for better performance.

Note

Provide the number of cameras as an integer (num_cameras) when launching nvblox, as shown below.

  1. Download and install the segmentation models as described in the single RealSense with people segmentation example.

  2. Multi-RealSense launch commands are provided below. Running live from RealSense cameras assumes that the cameras have been started successfully per the instructions above.

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    mode:=people_segmentation \
    num_cameras:=4 \
    attach_to_container:=True \
    container_name:='nvblox_container' \
    run_realsense:=False
    

Reconstruction With People Detection#

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.0/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/4realsense_detection_galileo.gif

This tutorial demonstrates how to perform dynamic people reconstruction using a detection model for systems with multiple RealSense cameras. For more information see the single RealSense tutorial.

Note

Provide the number of cameras as an integer (num_cameras) when launching nvblox, as shown below.

  1. Download and install the detection models as described in the single RealSense with people detection example.

  2. Multi-RealSense launch commands are provided below. Running live from RealSense cameras assumes that the cameras have been started successfully per the instructions above.

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    mode:=people_detection \
    num_cameras:=4 \
    attach_to_container:=True \
    container_name:='nvblox_container' \
    run_realsense:=False
    

Reconstruction With Dynamic Scene Elements#

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.0/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_nvblox/4realsense_dynamic_galileo.gif

This tutorial demonstrates how to reconstruct dynamic scene elements using dynamic detection for systems with multiple RealSense cameras. For more information, see the single RealSense tutorial.

Note

Provide the number of cameras as an integer (num_cameras) when launching nvblox, as shown below.

  1. Multi-RealSense launch commands are provided below. Running live from RealSense cameras assumes that the cameras have been started successfully per the instructions above.

    ros2 launch nvblox_examples_bringup realsense_example.launch.py \
    mode:=dynamic \
    num_cameras:=4 \
    attach_to_container:=True \
    container_name:='nvblox_container' \
    run_realsense:=False
    

Troubleshooting#

See RealSense Issues.