Source code on GitHub.


Set Up Development Environment

  1. Set up your development environment by following the instructions in the Getting Started guide.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
   git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.


    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Build isaac_ros_stereo_image_proc

  1. Launch the Docker container using the script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
  2. Install the prebuilt Debian package:

    sudo apt-get install -y ros-humble-isaac-ros-image-proc ros-humble-isaac-ros-stereo-image-proc

Run Launch File

  1. Ensure that you have already set up your RealSense camera using the RealSense setup tutorial. If you have not, please set up the sensor and then restart this quickstart from the beginning.

  2. Continuing inside the container, install the following dependencies:

    sudo apt-get install -y ros-humble-isaac-ros-examples ros-humble-isaac-ros-realsense ros-humble-isaac-ros-depth-image-proc
  3. Run the launch file to start the example, then wait for 10 seconds:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=realsense_stereo_rect,disparity,disparity_to_depth,point_cloud_xyz
  4. Observe the point cloud output on topic /points from a separate terminal with the command:

    ros2 topic echo /points


For RealSense camera package issues, please refer to the RealSense troubleshooting section.

Other supported cameras are listed in the Isaac ROS documentation.

Try More Examples

To continue your exploration, check out the suggested examples in the Isaac ROS documentation.



The isaac_ros_stereo_image_proc package offers functionality for handling image pairs from a binocular/stereo camera setup, calculating the disparity between the two images, and producing a point cloud with depth information. It largely replaces the stereo_image_proc package.
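As a toy illustration of what "calculating the disparity between the two images" means, the sketch below runs naive block matching on a single rectified scanline: for each pixel in the left image, it searches for the horizontal shift that best matches the right image. This is conceptual only — the actual DisparityNode uses VPI's hardware-accelerated SGM, and all values here are made up.

```python
# Naive 1-D block matching: for each left pixel, find the disparity d
# (horizontal shift) minimizing the sum of absolute differences (SAD)
# against the right scanline. NOT the SGM algorithm DisparityNode uses.

def disparity_1d(left, right, max_disparity=8, window=1):
    """Per-pixel disparity along one rectified scanline (SAD matching)."""
    n = len(left)
    out = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disparity, x) + 1):
            # SAD over a small window, clamping indices at the borders
            cost = sum(
                abs(left[min(max(x + k, 0), n - 1)] -
                    right[min(max(x + k - d, 0), n - 1)])
                for k in range(-window, window + 1)
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        out.append(best_d)
    return out

# The right scanline is the left one shifted 2 px (a feature at x in the
# left image appears at x - 2 in the right image), so disparity is 2.
left = [0, 0, 0, 10, 20, 30, 20, 10, 0, 0]
right = [0, 10, 20, 30, 20, 10, 0, 0, 0, 0]
print(disparity_1d(left, right))
```

Around the textured region the recovered disparity is 2, matching the known shift; flat, textureless border pixels are ambiguous, which is why real stereo pipelines use confidence thresholds and smoothness penalties.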

Available Components


DisparityNode

Topics Subscribed

  left/image_rect, left/camera_info: The left camera stream

  right/image_rect, right/camera_info: The right camera stream

Topics Published

  disparity: The disparity between the two cameras

ROS Parameters

  max_disparity: The maximum value for disparity per pixel, 64 by default. With the ORIN backend, this value must be 128 or 256.

  backends: The VPI backend to use, CUDA by default (options: “CUDA”, “XAVIER”, “ORIN”)

  confidence_threshold: The confidence threshold for the VPI SGM algorithm

  window_size: The window size for SGM disparity calculation

  num_passes: The number of passes SGM takes to compute the result

  p1: Penalty on disparity changes of +/- 1 between neighboring pixels

  p2: Penalty on disparity changes of more than 1 between neighboring pixels

  p2_alpha: Alpha for P2

  quality: Quality of the disparity output. Only applicable with the XAVIER backend; the higher the value, the better the quality, possibly at slower performance.

Refer to the VPI documentation for more details on these parameters.
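One practical consequence of max_disparity is worth spelling out: since depth is inversely proportional to disparity, the largest disparity the matcher can report sets the closest depth it can measure. The sketch below uses hypothetical calibration values, not real RealSense data.

```python
# Depth from disparity: Z = focal_px * baseline_m / disparity_px.
# The disparity search range (max_disparity) therefore bounds the
# minimum measurable depth. Calibration values below are assumed.

focal_px = 640.0      # focal length in pixels (hypothetical)
baseline_m = 0.05     # stereo baseline in meters (hypothetical)

def min_depth(max_disparity):
    """Closest measurable depth for a given disparity search range."""
    return focal_px * baseline_m / max_disparity

for d in (64, 128, 256):
    print(f"max_disparity={d:3d} -> min depth {min_depth(d):.3f} m")
```

Doubling max_disparity halves the minimum working distance, at the cost of a larger search per pixel.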


PointCloudNode

Topics Subscribed

  left/image_rect_color: The coloring for the point cloud

  left/camera_info: The left camera info

  right/camera_info: The right camera info

  disparity: The disparity between the two cameras

Topics Published

  points: The output point cloud

ROS Parameters

  use_color: Whether or not the output point cloud should have color; false by default

  unit_scaling: The amount to scale the xyz points by
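How a disparity pixel becomes a 3D point can be sketched with the pinhole camera model. The intrinsics below are hypothetical — PointCloudNode takes them from the camera_info topics — and unit_scaling is applied as a uniform scale on the resulting coordinates.

```python
# Back-projection sketch: pixel (u, v) with disparity d -> (X, Y, Z).
# Z = fx * baseline / d, then X and Y follow from the pinhole model.
# All calibration values are assumed for illustration.

fx = fy = 500.0        # focal lengths in pixels (hypothetical)
cx, cy = 320.0, 240.0  # principal point (hypothetical)
baseline = 0.06        # stereo baseline in meters (hypothetical)

def reproject(u, v, disparity, unit_scaling=1.0):
    """Back-project one pixel with its disparity to an (X, Y, Z) point."""
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x * unit_scaling, y * unit_scaling, z * unit_scaling)

# A pixel 100 px right of the principal point with 30 px of disparity:
print(reproject(420.0, 240.0, 30.0))
```

With unit_scaling=1000.0 the same point would come out in millimeters instead of meters, which is the kind of unit conversion the parameter is for.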


DisparityToDepthNode

Topics Subscribed

  disparity: The disparity image

Topics Published

  depth: The resultant depth image
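At the image level, the disparity-to-depth conversion applies Z = f·B/d per pixel, with invalid (zero) disparities carried through as zero depth. A minimal sketch, with assumed calibration values rather than anything read from the node:

```python
# Disparity image -> depth image: depth = focal_px * baseline_m / disparity
# per pixel; zero disparity (no stereo match) maps to zero depth.
# Calibration constants are hypothetical.

focal_px = 400.0
baseline_m = 0.1

def disparity_to_depth(disparity_image):
    """Convert a 2-D disparity array (pixels) to a depth array (meters)."""
    return [[focal_px * baseline_m / d if d > 0 else 0.0 for d in row]
            for row in disparity_image]

disp = [[40.0, 20.0],
        [ 0.0, 10.0]]
print(disparity_to_depth(disp))  # closer objects have larger disparity
```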



DisparityNode with the ORIN backend requires a max_disparity value of 128 or 256, but the default value is 64. The ORIN backend also requires images in nv12 format; use ImageFormatConverterNode to convert the input to nv12.
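For context on what that nv12 requirement means: NV12 stores a full-resolution luma (Y) plane followed by a single half-resolution, interleaved chroma (UV) plane. The sketch below converts RGB to NV12 using full-range BT.601 coefficients as an assumption — the actual ImageFormatConverterNode does this on the GPU and may use different coefficients.

```python
# RGB -> NV12 layout sketch: one W*H Y plane, then one interleaved UV plane
# subsampled 2x2, for a total of W*H*3/2 bytes. Full-range BT.601
# coefficients are an assumption for illustration.

def rgb_to_nv12(pixels, width, height):
    """pixels: row-major list of (R, G, B) tuples; width/height must be even."""
    def yuv(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128
        v = 0.500 * r - 0.419 * g - 0.081 * b + 128
        return y, u, v

    y_plane, uv_plane = [], []
    for r, g, b in pixels:                  # full-resolution luma plane
        y_plane.append(round(yuv(r, g, b)[0]))
    for row in range(0, height, 2):         # chroma subsampled 2x2
        for col in range(0, width, 2):
            r, g, b = pixels[row * width + col]  # top-left sample per block
            _, u, v = yuv(r, g, b)
            uv_plane += [round(u), round(v)]     # interleaved U, V
    return bytes(y_plane + uv_plane)

buf = rgb_to_nv12([(128, 128, 128)] * 16, 4, 4)
print(len(buf))  # 4*4 luma + 4*4/2 chroma = 24 bytes
```

The 1.5 bytes-per-pixel layout is why nv12 is the preferred input for hardware vision backends: the luma plane is directly usable and the chroma bandwidth is quartered.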