isaac_ros_cumotion_robot_segmenter#
Source code available on GitHub.
Overview#
The isaac_ros_cumotion_robot_segmenter package provides a ROS 2 node that segments robot parts out of depth images so that the robot's own structure is not treated as an obstacle in perception pipelines. This is essential for manipulation scenarios, where the robot's own body should not interfere with obstacle detection or object pose estimation.
The node takes depth images and robot joint states as input and produces segmented depth images in which pixels occupied by the robot are masked out. This enables downstream perception algorithms (such as nvblox or FoundationPose) to ignore the robot's own geometry.
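The effect on the data can be sketched in pure Python (illustrative only; the node itself performs this work on the GPU). Per the output description below, robot pixels in the mask are 0 and background pixels are the max value:

```python
# Illustration of the segmentation output: given an input depth image and a
# binary robot mask (robot pixels = 0, background = max value), the segmented
# depth image zeroes out every pixel that belongs to the robot.

def apply_robot_mask(depth, mask, background=255):
    """Return a segmented depth image: robot pixels (mask == 0) become 0."""
    return [
        [d if m == background else 0.0 for d, m in zip(drow, mrow)]
        for drow, mrow in zip(depth, mask)
    ]

depth = [[1.5, 2.0], [0.8, 3.1]]   # 32FC1-style depth, meters
mask = [[255, 0], [0, 255]]        # 0 = robot, 255 = background
print(apply_robot_mask(depth, mask))  # [[1.5, 0.0], [0.0, 3.1]]
```

Downstream consumers then see zero depth (no valid measurement) wherever the robot occludes the scene.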
Key features:
Support for multiple cameras
Dynamic robot description reloading
Support for both 32FC1 (float, in meters) and 16UC1 (uint16, in millimeters) depth image encodings
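The relationship between the two supported encodings is a fixed unit conversion, sketched below in plain Python (real images would of course be processed as arrays):

```python
# 16UC1 stores depth as unsigned 16-bit millimeters;
# 32FC1 stores depth as 32-bit float meters.

def uint16_mm_to_float_m(depth_mm):
    """Convert a row of 16UC1 values (mm) to 32FC1-style meters."""
    return [v / 1000.0 for v in depth_mm]

def float_m_to_uint16_mm(depth_m):
    """Convert meters back to uint16 millimeters, clamped to the 16-bit range.
    A value of 0 conventionally means "no valid depth"."""
    return [min(65535, max(0, round(v * 1000.0))) for v in depth_m]

row_mm = [0, 1500, 2075]
print(uint16_mm_to_float_m(row_mm))  # [0.0, 1.5, 2.075]
```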
Quickstart#
Set Up Development Environment#
Set up your development environment by following the instructions in getting started.
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.
Build isaac_ros_cumotion_robot_segmenter#
Activate the Isaac ROS environment:
isaac-ros activate

Install the prebuilt Debian package:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-ros-cumotion-robot-segmenter
Install Git LFS:
sudo apt-get install -y git-lfs && git lfs install
Alternatively, to build the package from source, clone this repository under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
  git clone --recurse-submodules -b release-4.4 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_cumotion.git isaac_ros_cumotion
Activate the Isaac ROS environment:
isaac-ros activate

Use rosdep to install the package's dependencies:

sudo apt-get update
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_cumotion/isaac_ros_cumotion_robot_segmenter --ignore-src -y
Build the package from source:
cd ${ISAAC_ROS_WS} && \
  colcon build --symlink-install --packages-up-to isaac_ros_cumotion_robot_segmenter --base-paths ${ISAAC_ROS_WS}/src/isaac_ros_cumotion/isaac_ros_cumotion_robot_segmenter
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Isaac ROS environment.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
source install/setup.bash
Download Quickstart Assets#
Run this script to download the r2b_robotarm dataset from NGC to ${ISAAC_ROS_WS}/isaac_ros_assets/r2b_2024/r2b_robotarm:
bash $(ros2 pkg prefix --share isaac_ros_cumotion_robot_segmenter)/test/download_rosbag.sh
Run Launch File#
Continuing inside the Isaac ROS environment, run the following launch file to spin up the node:
ros2 launch isaac_ros_cumotion_robot_segmenter robot_segmenter.launch.py \
  robot_segmenter.urdf_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_segmenter)/test/test_data/robot.urdf \
  robot_segmenter.xrdf_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_segmenter)/test/test_data/robot.xrdf
Open a second terminal inside the Isaac ROS environment:
isaac-ros activate

Run the rosbag file to simulate an image stream:

ros2 bag play --clock -l ${ISAAC_ROS_WS}/isaac_ros_assets/r2b_2024/r2b_robotarm \
  --remap /camera_1/aligned_depth_to_color/image_raw:=/depth_image \
  camera_1/color/camera_info:=/rgb/camera_info
Open a third terminal inside the Isaac ROS environment:
isaac-ros activate

Run a static transform publisher to set up the camera-to-robot transform:

ros2 run tf2_ros static_transform_publisher --frame-id base_link --child-frame-id camera_1_infra1_optical_frame \
  --x -0.686 --y 0.595 --z 0.996 \
  --qx -0.007 --qy 0.901 --qz -0.427 --qw 0.074
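A quick sanity check when adapting this command to your own setup: a valid rotation quaternion must be (approximately) unit length, and a badly scaled quaternion usually means a typo in the launch arguments. For the values above:

```python
import math

# Quaternion from the static_transform_publisher command above.
qx, qy, qz, qw = -0.007, 0.901, -0.427, 0.074

# A rotation quaternion should have unit norm (within rounding of the
# printed three-decimal values).
norm = math.sqrt(qx * qx + qy * qy + qz * qz + qw * qw)
print(f"quaternion norm = {norm:.4f}")  # close to 1.0
assert abs(norm - 1.0) < 1e-2
```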
Visualize Results#
Open a new terminal inside the Isaac ROS environment:
isaac-ros activate

Visualize the robot mask in RViz:

rviz2

Click the Add button and select By topic. In the By topic window, select the topic /cumotion/camera_1/robot_mask.

Set the Fixed Frame to base_link in the RViz window.

Optionally, add the original depth image to the RViz window to compare the segmented output with the original input.
Note
Due to initial warm-up time, the visualization may take up to 1 minute to appear.
Running Tests#
The package includes integration tests that verify the robot segmentation functionality. Follow these steps to run the tests using the quickstart assets downloaded earlier.
Run all the tests with pytest:

export RUN_ROBOT_SEGMENTOR_POL_TEST=true  # also runs the test that plays the rosbag
python3 -m pytest $(ros2 pkg prefix --share isaac_ros_cumotion_robot_segmenter)/test
Alternatively, you can run a specific test using launch_test, for example:

launch_test $(ros2 pkg prefix --share isaac_ros_cumotion_robot_segmenter)/test/test_robot_segmenter_float16.py
The test suite includes:

test_robot_segmenter_uint16.py: tests segmentation with 16-bit unsigned integer depth images
test_robot_segmenter_float16.py: tests segmentation with 32-bit floating-point depth images
test_robot_segmenter_reload.py: tests dynamic robot description reloading
test_robot_segmenter_pol.py: tests segmentation while a rosbag plays
Troubleshooting#
Isaac ROS Troubleshooting#
For solutions to problems with Isaac ROS, see troubleshooting.
Common Issues#
No output published
Verify that all input topics are being published:
ros2 topic list
ros2 topic hz /depth_image
ros2 topic hz /joint_states
Ensure the TF tree is complete from the camera optical frame to the robot base frame.
Check that joint states contain all required joints for your robot URDF.
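The joint-state check amounts to a set comparison between the joints the URDF declares and the names in the incoming JointState message. A minimal sketch (the helper name and joint names are hypothetical):

```python
# Hypothetical helper: every joint the URDF declares must appear in the
# JointState message, otherwise the robot pose (and hence the mask)
# cannot be computed.

def missing_joints(urdf_joints, joint_state_names):
    """Return URDF joints absent from the received joint state, sorted."""
    return sorted(set(urdf_joints) - set(joint_state_names))

urdf_joints = ["joint_1", "joint_2", "joint_3"]
received = ["joint_1", "joint_3"]
print(missing_joints(urdf_joints, received))  # ['joint_2']
```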
Robot not properly masked
Verify the camera-to-robot transform is correct:
ros2 run tf2_ros tf2_echo base_link camera_optical_frame
Check that the urdf_path and xrdf_path parameters point to valid files.

Increase the distance_threshold parameter to expand the masking buffer around robot geometry.
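Conceptually, distance_threshold behaves as follows: a depth pixel is treated as part of the robot if its 3D point lies within the threshold distance of the robot geometry, so a larger threshold masks a thicker shell around the robot. A hedged sketch of that decision (the per-pixel distances here are made up for illustration):

```python
# Sketch of the distance_threshold effect: a pixel is masked as "robot"
# when its 3D point is within distance_threshold of the robot surface.

def is_robot_pixel(dist_to_robot_m, distance_threshold):
    return dist_to_robot_m <= distance_threshold

# Hypothetical per-pixel distances (meters) from the robot surface.
distances = [0.0, 0.01, 0.03, 0.10]
print([is_robot_pixel(d, 0.02) for d in distances])  # [True, True, False, False]
print([is_robot_pixel(d, 0.05) for d in distances])  # wider buffer masks more pixels
```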
Launch file not supported when running as a standalone node
When the launch file is run as a standalone node, the node cannot find the composable container it expects to load into.
To fix this, either run the launch file from a higher-level bringup that owns the container, such as cumotion.launch.py, or run a modified launch file that creates the composable container itself.
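As a sketch of the second option, a standalone launch file can own the container itself. The container name below is an assumption, and the component plugin name is deliberately left as a placeholder; consult the installed package for the plugin it actually exports:

```python
# Hypothetical standalone launch file that creates its own composable
# container and loads the segmenter component into it.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='robot_segmenter_container',  # assumed container name
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[
            ComposableNode(
                package='isaac_ros_cumotion_robot_segmenter',
                # Placeholder: replace with the plugin name the package registers.
                plugin='REPLACE_WITH_ACTUAL_PLUGIN_NAME',
                name='robot_segmenter',
            ),
        ],
        output='screen',
    )
    return LaunchDescription([container])
```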
API#
RobotSegmenter#
ROS Parameters#
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| urdf_path | | | Path to the robot's URDF file (required) |
| xrdf_path | | | Path to the robot's XRDF file (required) |
| | | | Name of the robot's base frame in the TF tree |
| distance_threshold | | | Additional buffer distance (in meters) around robot geometry for masking |
| | | | Size of the message synchronization queue |
| | | | QoS profile for input topic subscriptions |
| | | | QoS profile for output topic publications |
| | | | Enable performance metrics logging for debugging and optimization |
| | | | Topic to listen for robot description reload signals |
| | | | Service name for fetching updated robot description |
ROS Topics Subscribed#
| ROS Topic | Interface | Description |
|---|---|---|
| /depth_image | sensor_msgs/msg/Image | Input depth image. Supported encodings: 32FC1 and 16UC1 |
| /rgb/camera_info | sensor_msgs/msg/CameraInfo | Camera intrinsics for the depth camera. Used to project the robot model into image space |
| /joint_states | sensor_msgs/msg/JointState | Current joint positions of the robot. Used to determine the robot's pose for segmentation |
| | | Signal to trigger robot description reload from service |
ROS Topics Published#
| ROS Topic | Interface | Description |
|---|---|---|
| /cumotion/camera_1/robot_mask | sensor_msgs/msg/Image | Binary mask indicating robot pixels. Same dimensions as input. Robot pixels are marked as 0, background as max value |
| | sensor_msgs/msg/Image | Segmented depth image with robot pixels removed (set to 0). Same dimensions and encoding as input |
ROS Services Requested#
| ROS Service | Interface | Description |
|---|---|---|
| | | Service to fetch updated URDF and XRDF when robot description reload is triggered |
Launch File Parameters#
robot_segmenter.launch.py#
| Launch Argument | Default | Description |
|---|---|---|
| robot_segmenter.urdf_path | | Path to the robot's URDF file |
| robot_segmenter.xrdf_path | | Path to the robot's XRDF file |
| | | Number of cameras to process |
| | | List of input depth image topics (one per camera) |
| | | List of camera info topics (one per camera) |
| | | List of output robot mask topics (one per camera) |
| | | List of output segmented depth topics (one per camera) |
| | | Topic for robot joint states |
| | | Robot base frame name |
| | | Buffer distance around robot geometry (meters) |
| | | Name of the composable node container to load into |
| | | QoS profile for input topic subscriptions |
| | | QoS profile for output topic publications |
| | | Enable performance metrics logging for debugging and optimization |
| | | Topic to listen for robot description reload signals |
| | | Service name for fetching updated robot description |
| | | Enable CUDA MPS Service for improved GPU utilization |
| | | Directory for CUDA MPS pipe (used only when MPS is enabled) |
| | | CUDA MPS client priority (used only when MPS is enabled) |
| | | CUDA MPS active thread percentage (used only when MPS is enabled) |