Source code on GitHub.


Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
        git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.


    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure required libraries are installed.

    sudo apt-get install -y curl tar

    Then, run these commands to download the asset from NGC.

    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/${NGC_VERSION} && \
        curl -LO --request GET "${REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets/${NGC_VERSION} && \
        rm ${NGC_FILENAME}
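The download step follows a common fetch/verify/extract pattern. A minimal, self-contained sketch of that pattern (placeholder filenames and a locally created archive stand in for the actual NGC asset):

```shell
# Sketch of the fetch/verify/extract pattern above, using a locally created
# archive as a stand-in for the NGC download; all filenames are placeholders.
set -euo pipefail
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p payload && echo "sample" > payload/data.txt
tar -czf asset.tar.gz payload        # stand-in for the curl step
tar -tzf asset.tar.gz > /dev/null    # fail fast if the archive is corrupt
mkdir -p assets
tar -xzf asset.tar.gz -C assets && rm asset.tar.gz
cat assets/payload/data.txt          # prints: sample
```

Listing the archive with `tar -tzf` before extracting catches a truncated or corrupt download early, before any files are written into the assets directory.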

Build isaac_ros_segformer

  1. Launch the Docker container using the script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
        ./scripts/run_dev.sh
  2. Install the prebuilt Debian package:

    sudo apt-get install -y ros-humble-isaac-ros-segformer

Prepare PeopleSemSegFormer Model

  1. Open a new terminal and attach to the container.

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
        ./scripts/run_dev.sh
  2. Download the PeopleSemSegFormer ONNX file:

    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1 && \
     cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1 && \
     wget --content-disposition '' -O model.onnx
  3. Convert the ONNX file to a TensorRT plan file:

    /usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1/model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1/model.plan


    The model conversion time varies across different platforms. On Jetson AGX Orin, the engine conversion process takes ~10-15 minutes to complete.
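Before copying the config in the next step, it can help to confirm that trtexec actually wrote a non-empty engine, since a failed conversion can leave no plan file behind. A small sketch (the path matches the conversion step; the fallback default is illustrative only):

```shell
# Sanity-check the generated TensorRT engine before proceeding; trtexec can
# fail without producing a plan file. The fallback path is illustrative.
PLAN="${ISAAC_ROS_WS:-/tmp/isaac_ros_ws}/isaac_ros_assets/models/peoplesemsegformer/1/model.plan"
if [ -s "$PLAN" ]; then
  echo "engine ready: $PLAN"
else
  echo "model.plan missing or empty; re-run the trtexec step" >&2
fi
```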

  4. Create a file called ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/config.pbtxt by copying the sample config file:

    cp ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_segformer/peoplesemsegformer_config.pbtxt ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/config.pbtxt
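Steps 2-4 above produce a standard Triton model repository: one directory per model holding config.pbtxt, with numbered version subdirectories containing the engine. The expected layout, sketched here with empty placeholder files in a temporary directory (the real files live under ${ISAAC_ROS_WS}/isaac_ros_assets/models):

```shell
# Illustrative sketch of the Triton model repository layout the quickstart
# builds; placeholder files in a temp dir stand in for the real artifacts.
repo=$(mktemp -d)
mkdir -p "$repo/peoplesemsegformer/1"
touch "$repo/peoplesemsegformer/1/model.plan"   # TensorRT engine (version 1)
touch "$repo/peoplesemsegformer/config.pbtxt"   # Triton model configuration
find "$repo" -mindepth 1 | sed "s|$repo/||" | sort
# -> peoplesemsegformer
# -> peoplesemsegformer/1
# -> peoplesemsegformer/1/model.plan
# -> peoplesemsegformer/config.pbtxt
```

The model_name and model_repository_paths arguments passed to the launch file later resolve against exactly this directory structure.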

Run Launch File

  1. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get install -y ros-humble-isaac-ros-examples
  2. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    ros2 launch isaac_ros_examples launch_fragments:=segformer interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_segformer/quickstart_interface_specs.json model_name:=peoplesemsegformer model_repository_paths:=[${ISAAC_ROS_WS}/isaac_ros_assets/models]
  3. Open another terminal and play the ROS bag:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
        ./scripts/run_dev.sh
    ros2 bag play -l isaac_ros_assets/isaac_ros_segformer/segformer_sample_data

Visualize Results

  1. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
        ./scripts/run_dev.sh
  2. Visualize and validate the output of the package by launching rqt_image_view:

    ros2 run rqt_image_view rqt_image_view

    Then inside the rqt_image_view GUI, change the topic to /segformer/colored_segmentation_mask to view a colorized segmentation mask.


    The raw segmentation mask is also published to /segformer/raw_segmentation_mask. However, its pixel values are class labels, so the output is unsuitable for human visual inspection.
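The difference between the two topics can be illustrated with a toy label-to-color mapping. The 0 = background / 1 = person labels below are an assumption for a two-class people model, not taken from the package:

```shell
# Toy illustration: the raw mask stores small class IDs per pixel (so it looks
# nearly black when viewed), while the colorized mask maps each ID to a
# visible RGB value. The 0/1 labels are an assumed two-class mapping.
for label in 0 1 1 0; do
  case "$label" in
    0) echo "label $label -> rgb(0,0,0)" ;;    # background stays black
    1) echo "label $label -> rgb(0,255,0)" ;;  # person rendered green
  esac
done
```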

Try More Examples

To continue your exploration, check out the following suggested examples:


Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see the Isaac ROS Troubleshooting page.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, see the Deep Learning Troubleshooting page.



Two launch files are provided for this package. The first uses isaac_ros_tensor_rt for inference, while the second uses isaac_ros_triton; both include the components needed to encode input images and decode Segformer's output. Note that Segformer reuses the U-Net decoder to decode the network output.


For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.

Launch File                  Components Used

TensorRT-based launch file   DnnImageEncoderNode, TensorRTNode, UNetDecoderNode

Triton-based launch file     DnnImageEncoderNode, TritonNode, UNetDecoderNode


Isaac ROS Segformer uses UNetDecoderNode for postprocessing and doesn't define any nodes of its own. Refer to the Isaac ROS UNet package for more details.