isaac_ros_segformer

Source code on GitHub.

Quickstart

Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Warning

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure required libraries are installed.

    sudo apt-get install -y curl tar
    

    Then, run these commands to download the asset from NGC.

    NGC_ORG="nvidia"
    NGC_TEAM="isaac"
    NGC_RESOURCE="isaac_ros_assets"
    NGC_VERSION="isaac_ros_segformer"
    NGC_FILENAME="quickstart.tar.gz"
    
    REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/versions/$NGC_VERSION/files/$NGC_FILENAME"
    
    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/${NGC_VERSION} && \
        curl -LO --request GET "${REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets/${NGC_VERSION} && \
        rm ${NGC_FILENAME}
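    If the download succeeded, the assets should now sit under the workspace. A hedged sanity check (paths are taken from the commands above; the fallback workspace path is an assumption based on the default dev container layout):

    ```shell
    # Check that the quickstart assets were extracted (path from the download step above)
    ASSET_DIR="${ISAAC_ROS_WS:-/workspaces/isaac_ros-dev}/isaac_ros_assets/isaac_ros_segformer"
    if [ -d "$ASSET_DIR" ]; then
        ASSET_STATUS="ok"
    else
        ASSET_STATUS="missing"
    fi
    echo "quickstart assets: $ASSET_STATUS ($ASSET_DIR)"
    ```

    If the check reports `missing`, re-run the `curl`/`tar` commands above before continuing.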
    

Build isaac_ros_segformer

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get install -y ros-humble-isaac-ros-segformer
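    As an optional follow-up, you can confirm the package installed cleanly with a `dpkg` status query (a hedged check, not part of the official quickstart; the package name is taken from the install command above):

    ```shell
    # Query dpkg for the package installed in the previous step
    if dpkg -s ros-humble-isaac-ros-segformer >/dev/null 2>&1; then
        PKG_STATUS="installed"
    else
        PKG_STATUS="not installed"
    fi
    echo "ros-humble-isaac-ros-segformer: $PKG_STATUS"
    ```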
    

Prepare PeopleSemSegFormer Model

  1. Open a new terminal and attach to the container.

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh
    
  2. Download the PeopleSemSegFormer ONNX file:

    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1 && \
     cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1 && \
     wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplesemsegformer/deployable_v1.0/files?redirect=true&path=peoplesemsegformer.onnx' -O model.onnx
    
  3. Convert the ONNX file to a TensorRT plan file:

    /usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1/model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/1/model.plan
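    Keep in mind that TensorRT engine files are tied to the GPU and TensorRT version they were built on, so the plan file must be regenerated when you change platforms. A hedged check that the conversion produced a non-empty engine (path taken from the `trtexec` command above):

    ```shell
    # Verify the TensorRT engine was produced and is non-empty
    PLAN="${ISAAC_ROS_WS:-/workspaces/isaac_ros-dev}/isaac_ros_assets/models/peoplesemsegformer/1/model.plan"
    if [ -s "$PLAN" ]; then
        PLAN_STATUS="ok"
    else
        PLAN_STATUS="missing or empty"
    fi
    echo "model.plan: $PLAN_STATUS"
    ```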
    

    Note

    The model conversion time varies across different platforms. On Jetson AGX Orin, the engine conversion process takes ~10-15 minutes to complete.

  4. Create a file called ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/config.pbtxt by copying the sample config file:

    cp ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_segformer/peoplesemsegformer_config.pbtxt ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegformer/config.pbtxt
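    After this step, the model directory should follow the standard Triton model repository layout (a sketch assembled from the paths used in the steps above):

    ```
    ${ISAAC_ROS_WS}/isaac_ros_assets/models/
    └── peoplesemsegformer/
        ├── config.pbtxt      # Triton model configuration (copied above)
        └── 1/                # model version directory
            ├── model.onnx    # downloaded ONNX file
            └── model.plan    # TensorRT engine built by trtexec
    ```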
    

Run Launch File

  1. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get install -y ros-humble-isaac-ros-examples
    
  2. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=segformer interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_segformer/quickstart_interface_specs.json model_name:=peoplesemsegformer model_repository_paths:=[${ISAAC_ROS_WS}/isaac_ros_assets/models]
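    Once the launch file is running, you can sanity-check that the pipeline came up by listing its topics from a second container terminal. This is a hedged helper, not part of the official quickstart; the `segformer` topic prefix is assumed from the visualization step below, and the `ros2` guard lets the snippet degrade gracefully outside the container:

    ```shell
    # Count visible segformer topics; run while the launch file above is active
    if command -v ros2 >/dev/null 2>&1; then
        SEGFORMER_TOPICS=$(ros2 topic list 2>/dev/null | grep -c segformer || true)
    else
        SEGFORMER_TOPICS=0
    fi
    echo "segformer topics visible: $SEGFORMER_TOPICS"
    ```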
    
  3. Open another terminal and play the ROS bag:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
    ros2 bag play -l isaac_ros_assets/isaac_ros_segformer/segformer_sample_data
    

Visualize Results

  1. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  2. Visualize and validate the output of the package by launching rqt_image_view:

    ros2 run rqt_image_view rqt_image_view
    

    Then inside the rqt_image_view GUI, change the topic to /segformer/colored_segmentation_mask to view a colorized segmentation mask.

    https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_image_segmentation/isaac_ros_segformer/peoplesemsegformer_output_rqt.png

    Note

    The raw segmentation mask is also published to /segformer/raw_segmentation_mask. However, the raw pixels correspond to the class labels and so the output is unsuitable for human visual inspection.

Try More Examples

To continue your exploration, check out the suggested examples in the Isaac ROS documentation.

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see the Isaac ROS Troubleshooting guide.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, see the Deep Learning Troubleshooting guide.

API

Usage

Two launch files are provided for this package: one performs inference with isaac_ros_tensor_rt, and the other with isaac_ros_triton. Both also launch the components needed to encode input images and decode Segformer's output. Note that Segformer reuses the U-Net decoder to decode the network output.

Warning

For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.

Launch File                                          Components Used
---------------------------------------------------  ---------------------------------------------------
isaac_ros_people_sem_segformer_tensor_rt.launch.py   DnnImageEncoderNode, TensorRTNode, UNetDecoderNode
isaac_ros_people_sem_segformer_triton.launch.py      DnnImageEncoderNode, TritonNode, UNetDecoderNode

Note

Isaac ROS Segformer uses UNetDecoderNode for postprocessing and does not provide any nodes of its own. Refer to the Isaac ROS U-Net package for more details.