isaac_ros_stereo_image_proc#
Source code available on GitHub.
Quickstart#
Set Up Development Environment#
Set up your development environment by following the instructions in getting started.
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.
Download Quickstart Assets#
Download quickstart data from NGC:
Make sure required libraries are installed.
sudo apt-get install -y curl jq tar
Then, run these commands to download the asset from NGC:
NGC_ORG="nvidia"
NGC_TEAM="isaac"
PACKAGE_NAME="isaac_ros_stereo_image_proc"
NGC_RESOURCE="isaac_ros_stereo_image_proc_assets"
NGC_FILENAME="quickstart.tar.gz"
MAJOR_VERSION=4
MINOR_VERSION=0
VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
AVAILABLE_VERSIONS=$(curl -s \
    -H "Accept: application/json" "$VERSION_REQ_URL")
LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
    .recipeVersions[]
    | .versionId as \$v
    | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
    | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
    | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
    | \$v
    " | sort -V | tail -n 1
)
if [ -z "$LATEST_VERSION_ID" ]; then
    echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
    echo "Found versions:"
    echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
else
    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
    FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
    curl -LO --request GET "${FILE_REQ_URL}" && \
    tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
    rm ${NGC_FILENAME}
fi
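The jq pipeline above keeps only semantic versions whose major version matches the requested release and whose minor version does not exceed it, then takes the newest one. Its selection logic can be sketched in Python (a minimal sketch; the version strings used below are hypothetical):

```python
import re

def pick_latest_compatible(version_ids, major_req=4, minor_req=0):
    """Mirror of the jq filter: keep well-formed X.Y.Z versions whose
    major matches and whose minor does not exceed the requested one,
    then return the highest remaining version (or None if none match)."""
    pattern = re.compile(r"^\d+\.\d+\.\d+$")
    candidates = []
    for v in version_ids:
        if not pattern.match(v):
            continue  # skip tags like "nightly" that are not X.Y.Z
        major, minor, patch = (int(p) for p in v.split("."))
        if major == major_req and minor <= minor_req:
            candidates.append((major, minor, patch, v))
    if not candidates:
        return None
    return max(candidates)[3]  # numeric tuple comparison replaces `sort -V`
```

Comparing numeric tuples instead of strings avoids the classic pitfall where `"4.10.0" < "4.9.0"` lexicographically.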
Build isaac_ros_stereo_image_proc#
Activate the Isaac ROS environment:
isaac-ros activate
Install the prebuilt Debian package:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-image-proc ros-jazzy-isaac-ros-stereo-image-proc
Install Git LFS:
sudo apt-get install -y git-lfs && git lfs install
Clone this repository under ${ISAAC_ROS_WS}/src:
cd ${ISAAC_ROS_WS}/src && \
   git clone -b release-4.0 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline.git isaac_ros_image_pipeline
Activate the Isaac ROS environment:
isaac-ros activate
Use rosdep to install the package’s dependencies:
sudo apt-get update
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_image_pipeline/isaac_ros_stereo_image_proc --ignore-src -y
Build the package from source:
cd ${ISAAC_ROS_WS}/ && \
   colcon build --symlink-install --packages-up-to isaac_ros_stereo_image_proc --base-paths ${ISAAC_ROS_WS}/src/isaac_ros_image_pipeline/isaac_ros_stereo_image_proc
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Isaac ROS environment.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
source install/setup.bash
Run Launch File#
Ensure that you have already set up your RealSense camera using the RealSense setup tutorial. If you have not, please set up the sensor and then restart this quickstart from the beginning.
Continuing inside the Isaac ROS environment, install the following dependencies:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-examples ros-jazzy-isaac-ros-realsense ros-jazzy-isaac-ros-depth-image-proc ros-jazzy-isaac-ros-image-proc
Run the launch file to start the example, then wait about 10 seconds for the pipeline to initialize.
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=realsense_stereo_rect,disparity,disparity_to_depth,point_cloud_xyz
Observe the point cloud output on topic /points in a separate terminal with the command:
ros2 topic echo /points
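The /points topic carries a sensor_msgs/msg/PointCloud2 message, which stores points as a packed byte buffer described by its field layout. A minimal sketch of unpacking XYZ values from such a buffer, assuming float32 x/y/z fields at byte offsets 0, 4, and 8 (a synthetic buffer stands in for a live message here):

```python
import struct

def unpack_xyz(data: bytes, point_step: int):
    """Yield (x, y, z) tuples from a PointCloud2-style packed buffer,
    assuming float32 x/y/z at byte offsets 0, 4, and 8 of each point."""
    for offset in range(0, len(data), point_step):
        yield struct.unpack_from("<fff", data, offset)

# Synthetic two-point buffer standing in for msg.data: a 16-byte
# point_step, i.e. 12 bytes of x/y/z plus 4 bytes of padding per point.
buf = b"".join(struct.pack("<fff", *p) + b"\x00" * 4
               for p in [(1.0, 2.0, 3.0), (-0.5, 0.0, 4.0)])
points = list(unpack_xyz(buf, point_step=16))
```

On a real message, read the offsets and `point_step` from the message's `fields` and `point_step` members rather than assuming them.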
Ensure that you have already set up your ZED camera using the ZED setup tutorial.
Continuing inside the Isaac ROS environment, install dependencies:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-examples ros-jazzy-isaac-ros-depth-image-proc ros-jazzy-isaac-ros-image-proc ros-jazzy-isaac-ros-zed
Run the following launch file to spin up a demo of this package using a ZED Camera:
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py \
   launch_fragments:=zed_stereo_rect,disparity,disparity_to_depth,point_cloud_xyz \
   interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_stereo_image_proc/zed2_quickstart_interface_specs.json
Note
If you are using the ZED X series, replace zed2_quickstart_interface_specs.json with zedx_quickstart_interface_specs.json in the above command.
Observe the point cloud output on topic /points in a separate terminal with the command:
ros2 topic echo /points
Try More Examples#
To continue your exploration, check out the following suggested examples:
API#
Overview#
The isaac_ros_stereo_image_proc package offers functionality for
handling image pairs from a binocular/stereo camera setup, calculating
the disparity between the two images, and producing a point cloud with
depth information. It largely replaces the stereo_image_proc
package.
Available Components#
DisparityNode#
ROS Parameters#
|ROS Parameter|Type|Default|Description|
|---|---|---|---|
|`backend`| | |The VPI backend to use.|
|`max_disparity`| | |The maximum disparity value per pixel. With the JETSON backend, this value must be 128 or 256.|
|`confidence_threshold`| | |Confidence threshold for the VPI SGM algorithm.|
|`confidence_type`| | |Confidence type used by the VPI SGM algorithm.|
|`window_size`| | |Window size for SGM disparity calculation.|
|`num_passes`| | |Number of SGM passes to compute the result.|
|`p1`| | |Penalty on disparity changes of +/- 1 between neighboring pixels.|
|`p2`| | |Penalty on disparity changes of more than 1 between neighboring pixels.|
|`p2_alpha`| | |Alpha used to scale `p2`.|
|`quality`| | |Quality of disparity output.|
ROS Topics Subscribed#
|ROS Topic|Interface|Description|
|---|---|---|
|`left/image_rect`|`sensor_msgs/msg/Image`|Left rectified image.|
|`left/camera_info`|`sensor_msgs/msg/CameraInfo`|Left camera info.|
|`right/image_rect`|`sensor_msgs/msg/Image`|Right rectified image.|
|`right/camera_info`|`sensor_msgs/msg/CameraInfo`|Right camera info.|
ROS Topics Published#
|ROS Topic|Interface|Description|
|---|---|---|
|`disparity`|`stereo_msgs/msg/DisparityImage`|Disparity image.|
Note
DisparityNode with the JETSON backend requires a max_disparity value of 128 or 256.
In addition, the JETSON backend requires the NV12 input image format. You can use the ImageFormatConverterNode to convert the input to NV12.
Note
For optimal performance on NVIDIA Thor, use the JETSON backend and configure the DisparityNode parameters as follows:
backend: JETSON
max_disparity: 256 (128 is also supported, depending on your range)
confidence_threshold: 32767
confidence_type: 2 (inference)
window_size: 5
num_passes: 3
p1: 3
p2: 48
p2_alpha: 0
quality: 6
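For reference, these settings could be captured in a ROS 2 parameter file (a sketch only; the node name `disparity_node` is an assumption and must match the name used in your launch configuration):

```yaml
# Recommended DisparityNode settings for NVIDIA Thor (JETSON backend).
# Node name below is an assumption; adjust it to your launch file.
disparity_node:
  ros__parameters:
    backend: "JETSON"
    max_disparity: 256   # 128 is also supported, depending on your range
    confidence_threshold: 32767
    confidence_type: 2   # inference
    window_size: 5
    num_passes: 3
    p1: 3
    p2: 48
    p2_alpha: 0
    quality: 6
```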
PointCloudNode#
ROS Parameters#
|ROS Parameter|Type|Default|Description|
|---|---|---|---|
|`use_color`| | |Whether the output point cloud should be colorized.|
|`unit_scaling`| | |Scale applied to the XYZ values of the point cloud.|
|`output_height`| | |Height used to size internal buffers for the output point cloud.|
|`output_width`| | |Width used to size internal buffers for the output point cloud.|
ROS Topics Subscribed#
|ROS Topic|Interface|Description|
|---|---|---|
|`left/image_rect_color`|`sensor_msgs/msg/Image`|Left rectified image used for color.|
|`left/camera_info`|`sensor_msgs/msg/CameraInfo`|Left camera info.|
|`right/camera_info`|`sensor_msgs/msg/CameraInfo`|Right camera info.|
|`disparity`|`stereo_msgs/msg/DisparityImage`|Disparity image input.|
ROS Topics Published#
|ROS Topic|Interface|Description|
|---|---|---|
|`points2`|`sensor_msgs/msg/PointCloud2`|Output point cloud.|
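The conversion PointCloudNode performs is standard stereo reprojection: given focal lengths fx, fy, principal point (cx, cy), and baseline B, a pixel (u, v) with disparity d maps to depth Z = fx · B / d and lateral coordinates proportional to the pixel's offset from the principal point. A minimal sketch of that math, using hypothetical camera intrinsics:

```python
def reproject(u, v, d, fx, fy, cx, cy, baseline):
    """Pinhole stereo reprojection of one pixel:
    Z = fx * B / d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    if d <= 0:
        return None  # non-positive disparity carries no depth information
    z = fx * baseline / d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics: fx = fy = 500 px, principal point (320, 240),
# 12 cm baseline. A pixel at the principal point with 25 px disparity
# lands on the optical axis (x = y = 0) at depth fx * B / d.
point = reproject(320, 240, 25.0, 500.0, 500.0, 320.0, 240.0, 0.12)
```

Note the inverse relationship: halving the disparity doubles the depth, which is why disparity noise hurts most at long range.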
DisparityToDepthNode#
ROS Parameters#
This component has no ROS parameters.
ROS Topics Subscribed#
|ROS Topic|Interface|Description|
|---|---|---|
|`disparity`|`stereo_msgs/msg/DisparityImage`|Disparity image input.|
ROS Topics Published#
|ROS Topic|Interface|Description|
|---|---|---|
|`depth`|`sensor_msgs/msg/Image`|Depth image output.|
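DisparityToDepthNode applies the stereo depth relation pixel-wise: depth = fx · baseline / disparity. A minimal sketch over a flat list of disparity values, with hypothetical focal length and baseline (invalid non-positive disparities map to 0.0 here; the node's actual handling of invalid pixels may differ):

```python
def disparity_to_depth(disparities, fx, baseline):
    """Convert disparity values (pixels) to depth values (meters),
    mapping non-positive disparities to 0.0 as a sentinel."""
    return [fx * baseline / d if d > 0 else 0.0 for d in disparities]

# Hypothetical intrinsics: fx = 500 px, 10 cm baseline.
depths = disparity_to_depth([50.0, 25.0, 0.0], fx=500.0, baseline=0.1)
```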