Isaac ROS Image Segmentation


Isaac ROS Image Segmentation contains a ROS 2 package for semantic image segmentation. isaac_ros_unet provides a method for classifying an input image at the pixel level: each pixel of the input image is predicted to belong to one of a set of defined classes. Classification is performed with GPU-accelerated DNN inference on a U-NET architecture model. The output prediction can be used by perception functions to understand where each class lies spatially in a 2D image, or can be fused with corresponding depth information to locate objects in a 3D scene.

A trained model based on the U-NET architecture is required to produce a segmentation mask. Input images may need to be cropped and resized to maintain the aspect ratio and match the input resolution of the U-NET DNN; image resolution may be reduced to improve DNN inference performance, which typically scales directly with the number of pixels in the image.
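The crop-and-resize preprocessing described above can be sketched as follows. This is a minimal CPU illustration using numpy only; `center_crop_resize` is a hypothetical helper, and the actual Isaac ROS pipeline performs this step with GPU acceleration.

```python
import numpy as np

def center_crop_resize(image: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Center-crop to the target aspect ratio, then nearest-neighbor resize.

    Hypothetical sketch of the preprocessing described above, not the
    Isaac ROS implementation.
    """
    in_h, in_w = image.shape[:2]
    target_ar = out_w / out_h
    if in_w / in_h > target_ar:
        # Input is too wide for the network's aspect ratio: crop the width.
        new_w = int(round(in_h * target_ar))
        x0 = (in_w - new_w) // 2
        image = image[:, x0:x0 + new_w]
    else:
        # Input is too tall: crop the height.
        new_h = int(round(in_w / target_ar))
        y0 = (in_h - new_h) // 2
        image = image[y0:y0 + new_h, :]
    # Nearest-neighbor resize via integer index maps.
    in_h, in_w = image.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows[:, None], cols]

# A 720p camera frame reduced to a hypothetical 960x544 network input.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(center_crop_resize(frame, 544, 960).shape)  # (544, 960, 3)
```

Reducing the network input resolution this way trades segmentation detail for inference throughput, since inference cost typically scales with the pixel count.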

Image segmentation provides more information than object detection, at a higher compute cost, because it produces a classification for every pixel, whereas object detection classifies only a bounding-box rectangle in image coordinates. Object detection answers whether an object exists and where it is spatially in the 2D image; image segmentation additionally answers which pixels belong to each class. One application fuses the segmentation result with corresponding depth information to determine an object's location in a 3D scene.
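The segmentation-plus-depth fusion mentioned above can be sketched with a pinhole camera model: select the pixels of one class from the mask, look up their depths, and back-project them into 3D. `class_centroid_3d` is a hypothetical helper for illustration, not part of isaac_ros_unet.

```python
import numpy as np

def class_centroid_3d(mask, depth, class_id, fx, fy, cx, cy):
    """Back-project the pixels of one segmentation class into 3D using a
    pinhole camera model and return their centroid (illustrative sketch)."""
    v, u = np.nonzero(mask == class_id)   # pixel rows/cols belonging to the class
    z = depth[v, u]
    valid = z > 0                         # skip pixels with missing depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

# Toy example: class 1 occupies the upper-left 2x2 corner of a 4x4 mask,
# with a flat depth of 2 m and made-up intrinsics.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[0:2, 0:2] = 1
depth = np.full((4, 4), 2.0)
print(class_centroid_3d(mask, depth, 1, fx=1.0, fy=1.0, cx=2.0, cy=2.0))
```

In a real pipeline the intrinsics would come from the camera's CameraInfo message and the depth image would be registered to the segmentation camera's frame.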


Isaac ROS NITROS Acceleration

This package is powered by NVIDIA Isaac Transport for ROS (NITROS), which leverages type adaptation and negotiation to optimize message formats and dramatically accelerate communication between participating nodes.

DNN models

A U-NET model is required to use isaac_ros_unet. NGC provides pre-trained models for use in your robotics application. NGC pre-trained models can be fine-tuned for your application using TAO, as shown in the examples on this page. PeopleSemSegNet provides a pre-trained model for best-in-class, real-time people segmentation. You can train your own U-NET architecture models, or download pre-trained models from one of the many model zoos available online, for use with isaac_ros_unet.

Refer to the NGC documentation for more information on how to use NGC models.

This package has been validated against the following NGC models:

Model Name                    Use Case
PeopleSemSegNet AMR           Segment people from the point of view of a mobile robot
PeopleSemSegNet ShuffleSeg    Semantically segment people at a high inference speed
PeopleSemSegNet               Semantically segment people


Supported Platforms

This package is designed and tested to be compatible with ROS 2 Humble running on Jetson or an x86_64 system with an NVIDIA GPU.


Versions of ROS 2 earlier than Humble are not supported. This package depends on specific ROS 2 implementation features that were only introduced beginning with the Humble release.






Platform                              Software
Jetson (Jetson Orin, Jetson Xavier)   JetPack 5.1.2
x86_64 with NVIDIA GPU                Ubuntu 20.04+ with CUDA 11.8+

For best performance on Jetson, ensure that power settings are configured appropriately.


To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following these steps. This streamlines your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.


All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.

Customize your Dev Environment

To customize your development environment, reference this guide.





Updates

Updated for Isaac ROS 2.0.0
Performance improvements
Source-available GXF extensions
Updated OSS licensing
Updated for compatibility with JetPack 5.0.2
Removed the frame_id, queue_size, and tensor_output_order parameters; added the network_output_type parameter (supporting sigmoid and argmax output layers); switched the implementation to use NITROS; removed support for odd-sized images; switched the tutorial to PeopleSemSegNet ShuffleSeg and moved unnecessary details to other READMEs
Isaac Sim HIL documentation update
Initial release
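The network_output_type parameter mentioned in the updates above selects between sigmoid and argmax output layers. The post-processing difference can be illustrated with a minimal sketch; `to_class_mask` is a hypothetical helper and does not reflect the NITROS GPU implementation, which this assumes a single-channel sigmoid output for.

```python
import numpy as np

def to_class_mask(output: np.ndarray, network_output_type: str) -> np.ndarray:
    """Illustrative post-processing for the two output-layer types.

    'argmax': the network already emits one class index per pixel (HxW).
    'sigmoid': the network emits a per-pixel probability map (HxW), which
    is thresholded at 0.5 here. Hypothetical sketch only.
    """
    if network_output_type == "argmax":
        return output.astype(np.int32)
    if network_output_type == "sigmoid":
        return (output > 0.5).astype(np.int32)
    raise ValueError(f"unsupported output type: {network_output_type}")

# A 2x2 sigmoid probability map becomes a binary class mask.
probs = np.array([[0.1, 0.9], [0.6, 0.2]])
print(to_class_mask(probs, "sigmoid"))
```

With an argmax output layer the thresholding step is unnecessary, since the DNN itself emits the winning class index per pixel.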