Tutorial: Running Isaac Perceptor on RGBD Cameras

This page contains tutorials for running nvblox together with Visual SLAM on RGBD cameras.

This tutorial requires one or more RGBD cameras publishing depth and color images in ROS.

cuVSLAM can be used for odometry if stereo cameras that satisfy the additional requirements below are available. Alternatively, a different odometry source can be used, such as wheel odometry.

Camera Prerequisites

Multiple RGBD Cameras

If more than one RGBD camera is used, the cameras need to be calibrated relative to each other and have their coordinate frames connected in /tf_static.
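
For example, a fixed transform between two camera frames can be published to /tf_static with the static_transform_publisher tool from tf2_ros (the frame names and offsets below are placeholders; use the values from your calibration):

    ros2 run tf2_ros static_transform_publisher \
      --x 0.10 --y 0.0 --z 0.0 --yaw 0.0 --pitch 0.0 --roll 0.0 \
      --frame-id front_camera_link --child-frame-id left_camera_link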

Odometry Coordinate Frame

If cuVSLAM is not used for odometry, the odometry source needs to publish the transforms that connect the odometry coordinate frame to the RGBD camera frames.
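
You can check that the transform tree is connected using the standard tf2 tools (the frame names below are placeholders for your odometry and camera frames):

    # Dump the full TF tree to a file for inspection
    ros2 run tf2_tools view_frames

    # Query the transform between the odometry frame and a camera frame
    ros2 run tf2_ros tf2_echo odom front_camera_link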

To enable cuVSLAM-based odometry, the following additional requirements must be met:

Stereo Camera Frames

The input for cuVSLAM must consist of accurately calibrated stereo camera frames.

No Projection Pattern

Stereo images provided to cuVSLAM must not contain a projection pattern. Stereo depth cameras that use projectors need to turn their projectors off to use cuVSLAM. If a projection pattern is necessary for accurate depth, one option is to alternate the projector on and off for each frame. In this setup, odd frames (projector on, accurate depth) are published to the camera topics used by nvblox, while even frames (projector off) are published to the camera topics used by cuVSLAM.

Multi-Camera Time Synchronization

If using more than one stereo camera for cuVSLAM, they should all capture data at the same time (the difference in capture time should be less than 1 ms).

Time Accuracy

Timestamps for the images need to be accurate relative to the timestamps of other images, including the prior and next frames. Each timestamp should accurately represent the capture time, with less than 1 ms of error. For a camera running at 30 fps, the frame-to-frame timestamp difference is expected to be 33.3 ms ± 1 ms.

Frame Drops

Time between frames must not exceed 100 ms (maximum of 2 consecutive frames dropped at 30 fps).
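
A quick way to check the frame rate, dropped frames, and timestamp spacing is to inspect the image topics directly (the topic name below is a placeholder):

    # Average rate and min/max inter-frame interval over a 30-message window
    ros2 topic hz --window 30 /some_camera/left_ir/image_raw

    # Print header timestamps to verify the frame-to-frame spacing
    ros2 topic echo --field header.stamp /some_camera/left_ir/image_raw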

Prerequisite Steps

  1. Complete the nvblox quickstart.

  2. If using cuVSLAM, complete the isaac_ros_visual_slam quickstart.

  3. Clone the isaac_perceptor repository under ${ISAAC_ROS_WS}/src:

    cd ${ISAAC_ROS_WS}/src
    git clone --recursive -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_perceptor.git isaac_perceptor
    
  4. Launch the Docker container using the run_dev.sh script (if not already launched):

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  5. Start the RGBD cameras so that they publish topics in ROS, following the camera vendor's instructions. Alternatively, play a ROSbag:

    ros2 bag play /path/to/rosbag
    
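    To check which topics a ROSbag contains and how many messages were recorded for each:

    ros2 bag info /path/to/rosbag
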
  6. Check the topics being published by the cameras (inside the docker container):

    ros2 topic list
    

    This will list the topics. You can check the frame rate of a particular topic with:

    ros2 topic hz --window 30 /TOPIC/NAME
    
  7. Create or update your rgbd_perceptor.yaml parameter file so that it has the correct topic names and frame IDs. In src/isaac_perceptor/isaac_ros_perceptor_bringup/params there is an example_rgbd_perceptor.yaml that you can copy and modify:

    cd ${ISAAC_ROS_WS}/src/isaac_perceptor/isaac_ros_perceptor_bringup/params
    cp example_rgbd_perceptor.yaml my_rgbd_perceptor.yaml
    

    Edit my_rgbd_perceptor.yaml:

    • Under nvblox_config, modify the topic names for depth_images and color_images. These topic names should appear in ros2 topic list and have data coming in at the expected frame rate when checked with ros2 topic hz --window 30 /TOPIC/NAME.

      Each camera should end up with these parameters specified under remappings:

          - depth:
              image: '/some_camera/depth/image'
              info: '/some_camera/depth/camera_info'
            color:
              image: '/some_camera/color/image'
              info: '/some_camera/color/camera_info'
      
    • If using cuVSLAM, modify the cuvslam_config section so that the stereo_images section contains the topic names and optical_frame for the left and right images. These topic names should appear in ros2 topic list and have data coming in at the expected frame rate when checked with ros2 topic hz --window 30 /TOPIC/NAME.

      Each stereo camera should end up with these parameters specified under stereo_images:

            - left:
                image: '/some_camera/left_ir/image_raw'
                info: '/some_camera/left_ir/camera_info'
                optical_frame: 'some_camera_left_ir_optical_frame'
              right:
                image: '/some_camera/right_ir/image_raw'
                info: '/some_camera/right_ir/camera_info'
                optical_frame: 'some_camera_right_ir_optical_frame'
      

      You can get the optical_frame for an image with ros2 topic echo /some/topic | grep frame_id.

      If you are not using cuVSLAM, delete the entire cuvslam_config section.

    • Under common_config, update the frame ID for the robot_frame, which is expected to be rigidly attached to the camera frames and to be on the ground. Also update odom_frame to the name of the frame that the odometry system produces (or the name of the frame that cuVSLAM will produce for odometry, if cuVSLAM is used).
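
      As a sketch, the resulting common_config section might look like the following (the exact layout should follow example_rgbd_perceptor.yaml, and the frame names are placeholders for your robot):

          common_config:
            robot_frame: 'base_link'
            odom_frame: 'odom'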

  8. If you created a new .yaml file, rerun colcon to propagate it to the install directory (inside the docker):

    cd ${ISAAC_ROS_WS}
    colcon build --symlink-install --packages-up-to isaac_ros_perceptor_bringup
    

RGBD Example

This example runs nvblox-based reconstruction from one or more RGBD cameras, either from live data coming directly from the cameras, or from playing a ROSbag.

  1. Start the Isaac ROS Dev Docker container (if it was not already started in the prerequisite steps):

    cd $ISAAC_ROS_WS && ./src/isaac_ros_common/scripts/run_dev.sh
    
  2. Navigate (inside the docker) to the workspace folder and source the workspace:

    cd ${ISAAC_ROS_WS}
    source install/setup.bash
    
  3. Run the multi-RGBD example:

    ros2 launch isaac_ros_perceptor_bringup rgbd_perceptor.launch.py config_file:=params/my_rgbd_perceptor.yaml
    
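    Once the launch file is running, you can confirm that reconstruction output is being published. The exact topic names depend on your configuration; /nvblox_node/mesh is shown here only as an assumption:

    ros2 topic list | grep nvblox
    ros2 topic hz --window 30 /nvblox_node/mesh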

Note

Based on the RGBD cameras and how they are mounted to the platform, users are expected to tune the ESDF slice heights (esdf_slice_min_height, esdf_slice_max_height) specified in their my_rgbd_perceptor.yaml config file. Details about nvblox mapping parameters can be found in the mapper parameters documentation.

Recording ROSbag Example

To record a ROSbag, you can use existing tools (such as ros2 bag record), or you can use rgbd_perceptor.launch.py. If you use rgbd_perceptor.launch.py, it records all ROS topics specified for nvblox and cuVSLAM, as well as any topics listed under the extra_topics: section, which by default contains /tf_static and is therefore expected to carry the extrinsic calibration. Once you have finished the prerequisite and RGBD Example steps above, you can record as follows:

ros2 launch isaac_ros_perceptor_bringup rgbd_perceptor.launch.py config_file:=params/my_rgbd_perceptor.yaml rosbag_output:=path_to_where_to_save
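
If you prefer to use ros2 bag record directly, record /tf_static together with the camera topics you configured in the parameter file (the topic names below follow the hypothetical examples from step 7 above):

ros2 bag record -o path_to_where_to_save \
  /tf_static \
  /some_camera/depth/image /some_camera/depth/camera_info \
  /some_camera/color/image /some_camera/color/camera_info \
  /some_camera/left_ir/image_raw /some_camera/left_ir/camera_info \
  /some_camera/right_ir/image_raw /some_camera/right_ir/camera_info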

Visualizing in Foxglove

To visualize the reconstruction from a remote machine, for example a robot, our recommended method is Foxglove, which is enabled by default in this launch file.

To visualize with Foxglove, see Foxglove Visualization. Ensure that you additionally install the nvblox Foxglove extension.

Note

When visualizing from a remote machine over WiFi, bandwidth is limited and easily exceeded, which can lead to poor visualization results. For best results, we recommend visualizing a limited number of topics and avoiding high-bandwidth topics such as images. Furthermore, it is necessary to limit the bandwidth of the mesh transmitted by nvblox. nvblox exposes a parameter for this purpose, layer_streamer_bandwidth_limit_mbps. When visualizing over WiFi, we recommend setting this to 30.
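
As a sketch, assuming this parameter is set under the nvblox_config section of your my_rgbd_perceptor.yaml (the exact location may differ in your configuration):

    nvblox_config:
      layer_streamer_bandwidth_limit_mbps: 30.0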