isaac_ros_foundationstereo#
Source code available on GitHub.
Quickstart#
Set Up Development Environment#
Set up your development environment by following the instructions in getting started.
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.
Download Quickstart Assets#
Download quickstart data from NGC:
Make sure required libraries are installed.
sudo apt-get install -y curl jq tar
Then, run these commands to download the asset from NGC:
NGC_ORG="nvidia"
NGC_TEAM="isaac"
PACKAGE_NAME="isaac_ros_foundationstereo"
NGC_RESOURCE="isaac_ros_foundationstereo_assets"
NGC_FILENAME="quickstart.tar.gz"
MAJOR_VERSION=4
MINOR_VERSION=0
VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
AVAILABLE_VERSIONS=$(curl -s \
    -H "Accept: application/json" "$VERSION_REQ_URL")
LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
    .recipeVersions[]
    | .versionId as \$v
    | \$v
    | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
    | split(\".\")
    | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
    | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
    | \$v
    " | sort -V | tail -n 1
)
if [ -z "$LATEST_VERSION_ID" ]; then
    echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
    echo "Found versions:"
    echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
else
    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
    FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
    curl -LO --request GET "${FILE_REQ_URL}" && \
    tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
    rm ${NGC_FILENAME}
fi
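The version-selection step above can be sketched in isolation: the jq filter keeps only versions whose major number equals `MAJOR_VERSION` and whose minor number is at most `MINOR_VERSION`, and `sort -V | tail -n 1` then picks the newest match. A minimal illustration with invented version strings, using grep as a stand-in for the jq filter:

```shell
# Stand-in for the version-selection logic: keep 4.0.x versions, take the newest.
# The version strings here are invented for illustration.
printf '4.0.2\n4.1.0\n3.2.5\n4.0.10\n' \
  | grep -E '^4\.0\.' \
  | sort -V \
  | tail -n 1
# → 4.0.10
```

Note that `sort -V` (version sort) is what makes `4.0.10` rank above `4.0.2`; plain lexicographic sort would get this wrong.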
Build isaac_ros_foundationstereo#
Activate the Isaac ROS environment:
isaac-ros activate

Install the prebuilt Debian package:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-foundationstereo && \
sudo apt-get install -y ros-jazzy-isaac-ros-foundationstereo-models-install
Download and install the pre-trained FoundationStereo model files:
sudo apt-get update
ros2 run isaac_ros_foundationstereo_models_install install_foundationstereo_models.sh --eula \
    --model_res high_res
Note
FoundationStereo supports two fixed resolution configurations:
- high_res: 576x960 resolution (default)
- low_res: 320x736 resolution (original training resolution)
You can set the default model resolution using the FOUNDATIONSTEREO_MODEL_RES environment variable. Use the --model_res argument to override the default or to explicitly select between these options.
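For example, a session that defaults to the low-resolution model might look like this (the export and echo are illustrative; only the install script and its flags come from this package):

```shell
# Make low_res the default for subsequent model installs
export FOUNDATIONSTEREO_MODEL_RES=low_res
echo "Default model resolution: ${FOUNDATIONSTEREO_MODEL_RES}"
# --model_res still takes precedence over the environment variable, e.g.:
#   ros2 run isaac_ros_foundationstereo_models_install install_foundationstereo_models.sh --eula --model_res high_res
```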
Install Git LFS:
sudo apt-get install -y git-lfs && git lfs install
Clone this repository under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
git clone -b release-4.0 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_stereo_depth.git isaac_ros_dnn_stereo_depth
Activate the Isaac ROS environment:
isaac-ros activate

Use rosdep to install the package's dependencies:

sudo apt-get update
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_dnn_stereo_depth/isaac_ros_foundationstereo --ignore-src -y
Download and install the pre-trained FoundationStereo model files:
sudo apt-get install -y ros-jazzy-isaac-ros-foundationstereo-models-install && \
ros2 run isaac_ros_foundationstereo_models_install install_foundationstereo_models.sh --eula \
    --model_res high_res
Note
FoundationStereo supports two fixed resolution configurations:
- high_res: 576x960 resolution (default)
- low_res: 320x736 resolution (original training resolution)
You can set the default model resolution using the FOUNDATIONSTEREO_MODEL_RES environment variable. Use the --model_res argument to override the default or to explicitly select between these options.

Build the package from source:
cd ${ISAAC_ROS_WS} && \
colcon build --packages-up-to isaac_ros_foundationstereo --base-paths ${ISAAC_ROS_WS}/src/isaac_ros_dnn_stereo_depth/isaac_ros_foundationstereo
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Isaac ROS environment.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
source install/setup.bash
Run Launch File#
Continuing inside the Isaac ROS environment, install the following dependencies:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-examples
Run the following launch file to spin up a demo using the quickstart rosbag:
For the default high_res (576x960) model:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=foundationstereo \
    engine_file_path:=${ISAAC_ROS_WS:?}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_576x960.engine

For the low_res (320x736) model:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=foundationstereo \
    engine_file_path:=${ISAAC_ROS_WS:?}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_320x736.engine \
    model_input_width:=736 model_input_height:=320
Open a second terminal and attach to the container:
isaac-ros activate
In the second terminal, play the FoundationStereo sample rosbag downloaded in the quickstart assets:
ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationstereo/rosbags/foundationstereo_rosbag \
    --remap /left/camera_info:=/left/camera_info_rect /right/camera_info:=/right/camera_info_rect
Note
This tutorial requires a compatible RealSense camera from the list of available cameras.
Ensure that you have already set up your RealSense camera using the RealSense setup tutorial. If you have not, set up the sensor and then restart this quickstart from the beginning.
Continuing inside the Isaac ROS environment, install the following dependencies:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-examples ros-jazzy-isaac-ros-realsense
Complete the steps to set up the FoundationStereo model as described in the quickstart.
Continuing inside the Isaac ROS environment, run the following launch file to spin up a demo using a RealSense stereo camera:
For the default high_res (576x960) model:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=realsense_stereo_rect,foundationstereo \
    engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_576x960.engine

For the low_res (320x736) model:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=realsense_stereo_rect,foundationstereo \
    engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_320x736.engine \
    model_input_width:=736 model_input_height:=320
Ensure that you have already set up your ZED camera using the ZED setup tutorial.
Continuing inside the Isaac ROS environment, install dependencies:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-examples ros-jazzy-isaac-ros-depth-image-proc ros-jazzy-isaac-ros-stereo-image-proc ros-jazzy-isaac-ros-zed
Continuing inside the Isaac ROS environment, run one of the following launch files to spin up a demo using a ZED stereo camera.

For the default high_res (576x960) model:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=zed_stereo_rect,foundationstereo \
    engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_576x960.engine \
    interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationstereo/zed2_quickstart_interface_specs.json

For the low_res (320x736) model:

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=zed_stereo_rect,foundationstereo \
    engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_320x736.engine \
    interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationstereo/zed2_quickstart_interface_specs.json \
    model_input_width:=736 model_input_height:=320
Note
If you are using the ZED X series, replace zed2_quickstart_interface_specs.json with zedx_quickstart_interface_specs.json in the above commands.
Visualize Output#
Open a terminal and attach to the container:
isaac-ros activate
In the terminal, visualize and validate the disparity output using the visualizer script:
ros2 run isaac_ros_foundationstereo isaac_ros_foundationstereo_visualizer.py
Alternatively, connect Foxglove Studio and set up an Image panel to visualize the depth image using the /depth topic.
Try More Examples#
To continue your exploration, check out the following suggested examples:
Isaac ROS Troubleshooting#
For solutions to problems with Isaac ROS, refer to Troubleshooting.
API#
Overview#
The isaac_ros_foundationstereo package offers functionality to generate a stereo
disparity map from stereo images using a trained FoundationStereo model. Given a pair
of stereo input images, the package generates a continuous disparity
image for the left input image. The package consists of the following node:
FoundationStereoDecoderNode: Processes the model output and generates the disparity map
Usage#
ros2 launch isaac_ros_foundationstereo isaac_ros_foundationstereo.launch.py engine_file_path:=<your FoundationStereo engine plan absolute path>
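Before launching, it can help to confirm that the engine plan actually exists at the path you pass. A hedged sketch (the path below is the quickstart default, not the only valid location):

```shell
# Fail early if the engine plan is missing (path is the quickstart default)
ENGINE="${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationstereo/deployable_foundation_stereo_small_v1.0/foundationstereo_576x960.engine"
if [ -f "$ENGINE" ]; then
  echo "engine found: $ENGINE"
else
  echo "engine missing: $ENGINE"
fi
```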
FoundationStereoDecoderNode#
ROS Parameters#
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| | `string` | `"disparity"` | Name of the disparity tensor in the input message. |
| `min_disparity` | `double` | `0.0` | Minimum disparity value for filtering. |
| `max_disparity` | `double` | `10000.0` | Maximum disparity value for filtering. |
ROS Topics Subscribed#
| ROS Topic | Interface | Description |
|---|---|---|
| | | Input tensor containing the disparity data. |
| | | The right camera model. |
ROS Topics Published#
| ROS Topic | Interface | Description |
|---|---|---|
| | | The processed disparity image. |
Input Restrictions#
The input left and right images must have the same dimension and resolution, and the resolution must be divisible by 32.
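A quick way to check a candidate resolution against the divisible-by-32 restriction, using plain shell arithmetic (the resolution values below are just examples):

```shell
# Check that width and height are both divisible by 32
width=960
height=576
if [ $((width % 32)) -eq 0 ] && [ $((height % 32)) -eq 0 ]; then
  echo "${width}x${height}: OK"
else
  echo "${width}x${height}: not divisible by 32"
fi
# → 960x576: OK
```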
Output Interpretations#
The isaac_ros_foundationstereo package outputs a disparity image with the same dimensions as the FoundationStereo model output. The input images are rescaled and padded to the FoundationStereo model input dimensions before inference. The disparity output is published as a continuous disparity map.

The disparity output is filtered to remove invalid values:

- Values below min_disparity (default: 0.0)
- Values above max_disparity (default: 10000.0)
- Invalid regions (inf, NaN) are set to 0.0

The right CameraInfo is used to composite a NitrosDisparityImage. If you only care about the disparity image and don't need the baseline and focal length information, you can pass dummy camera messages.
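As a reminder of why the baseline and focal length matter: metric depth is recovered from disparity as depth = fx * baseline / disparity. A small sketch with illustrative numbers (fx, baseline, and the disparity value below are invented, not values taken from this package):

```shell
# depth = fx * baseline / disparity (all values illustrative)
fx=700.0        # focal length in pixels
baseline=0.12   # stereo baseline in meters
disparity=35.0  # disparity in pixels
awk -v f="$fx" -v b="$baseline" -v d="$disparity" \
  'BEGIN { printf "depth = %.2f m\n", f * b / d }'
# → depth = 2.40 m
```

This is why a dummy CameraInfo is acceptable only when you never convert the disparity map to depth.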