isaac_ros_unet

Source code on GitHub.

Quickstart

Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure required libraries are installed.

    sudo apt-get install -y curl jq tar
    

    Then, run these commands to download the asset from NGC:

    NGC_ORG="nvidia"
    NGC_TEAM="isaac"
    PACKAGE_NAME="isaac_ros_unet"
    NGC_RESOURCE="isaac_ros_unet_assets"
    NGC_FILENAME="quickstart.tar.gz"
    MAJOR_VERSION=3
    MINOR_VERSION=2
    VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
    AVAILABLE_VERSIONS=$(curl -s \
        -H "Accept: application/json" "$VERSION_REQ_URL")
    LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
        .recipeVersions[]
        | .versionId as \$v
        | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
        | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
        | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
        | \$v
        " | sort -V | tail -n 1
    )
    if [ -z "$LATEST_VERSION_ID" ]; then
        echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
        echo "Found versions:"
        echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
    else
        mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
        FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
    versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
        curl -LO --request GET "${FILE_REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
        rm ${NGC_FILENAME}
    fi
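    As a quick sanity check, the extracted asset directory should now contain quickstart.bag and quickstart_interface_specs.json, both of which are referenced by later steps:

    ls ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_unet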
    

Build isaac_ros_unet

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-unet
    
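    (Optional) Confirm that the package is visible to ROS:

    ros2 pkg list | grep isaac_ros_unet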

Prepare PeopleSemSegnet Model

  1. Download and install model assets inside the Docker container:

    sudo apt-get install -y ros-humble-isaac-ros-peoplesemseg-models-install &&
    ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula &&
    ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
    
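    The install scripts convert the downloaded models into TensorRT engine plans, which may take several minutes. Assuming the default install locations, the vanilla engine used by the launch file in the next section should now exist:

    ls ${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegnet/deployable_quantized_vanilla_unet_onnx_v2.0/1/model.plan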

Run Launch File

  1. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-examples
    
  2. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=unet \
        interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_unet/quickstart_interface_specs.json \
        engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegnet/deployable_quantized_vanilla_unet_onnx_v2.0/1/model.plan \
        input_binding_names:=['input_1:0']
    
  3. Open a second terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  4. Run the rosbag file to simulate an image stream:

    ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_unet/quickstart.bag
    

Note

If you want to use the shuffleseg model, replace the engine_file_path with the shuffleseg engine location, set the input_binding_names to ['input_2'], and set use_planar_input to False.
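For reference, a ShuffleSeg run of the quickstart might then look like the following. The engine path here is an assumption based on the default install location used by install_peoplesemsegnet_shuffleseg.sh; verify the exact versioned directory with ls before launching:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=unet \
        interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_unet/quickstart_interface_specs.json \
        engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegnet_shuffleseg/deployable_shuffleseg_unet_onnx_v1.0/1/model.plan \
        input_binding_names:=['input_2'] \
        use_planar_input:=False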

Visualize Results

  1. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  2. Visualize and validate the output of the package with rqt_image_view:

    ros2 run rqt_image_view rqt_image_view /unet/colored_segmentation_mask
    

    After about 1 minute, your output should look like this:

    [Image: RQT showing segmentation of people]
  3. Visualize the blended image with rqt_image_view:

    ros2 run rqt_image_view rqt_image_view /segmentation_image_overlay
    

    Your output should look like this:

    [Image: RQT showing alpha-blended image]
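If neither view shows an image, a couple of generic ros2 CLI checks can help localize the problem, for example confirming that the output topics exist and that the decoder is publishing:

    # Confirm the U-Net output topics exist
    ros2 topic list | grep -e unet -e segmentation
    # Confirm messages are flowing (Ctrl+C to stop)
    ros2 topic hz /unet/colored_segmentation_mask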

Try More Examples

To continue your exploration, check out the suggested examples in the Isaac ROS documentation.

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see troubleshooting.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, see the deep learning troubleshooting page.

API

Usage

Three launch files are provided for this package. The first, isaac_ros_unet_tensor_rt.launch.py, performs inference with isaac_ros_tensor_rt, while isaac_ros_unet_triton.launch.py uses isaac_ros_triton; both include the components needed to encode input images and decode U-Net's output. The third, isaac_ros_argus_unet_triton.launch.py, additionally launches an Argus-compatible camera with a rectification node alongside the components found in isaac_ros_unet_triton.launch.py.

Note

For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.

Launch File                              Components Used
isaac_ros_unet_tensor_rt.launch.py       DnnImageEncoderNode, TensorRTNode, UNetDecoderNode
isaac_ros_unet_triton.launch.py          DnnImageEncoderNode, TritonNode, UNetDecoderNode
isaac_ros_argus_unet_triton.launch.py    ArgusMonoNode, RectifyNode, DnnImageEncoderNode, TritonNode, UNetDecoderNode
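These launch files can also be invoked directly. As a sketch, a TensorRT run reusing the quickstart engine might look like the following; engine_file_path and input_binding_names reuse the quickstart values, but consult the launch file for the full set of arguments it declares:

    ros2 launch isaac_ros_unet isaac_ros_unet_tensor_rt.launch.py \
        engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/peoplesemsegnet/deployable_quantized_vanilla_unet_onnx_v2.0/1/model.plan \
        input_binding_names:=['input_1:0']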

UNetDecoderNode

ROS Parameters

color_segmentation_mask_encoding (string, default: rgb8)
    The image encoding of the colored segmentation mask. Supported values: rgb8, bgr8.

color_palette (int64_t list, default: [])
    Vector of integers where each element is the RGB color hex code for the corresponding class.

network_output_type (string, default: softmax)
    The type of output that the network provides. Supported values: softmax, argmax, sigmoid.

mask_width (int16_t, default: 960)
    The width of the segmentation mask in pixels.

mask_height (int16_t, default: 544)
    The height of the segmentation mask in pixels.

Note

  • The model output should be NCHW or NHWC, where C refers to the class dimension.

  • For network_output_type, the softmax and sigmoid options expect a single 32-bit floating-point tensor, while the argmax option expects a single signed 32-bit integer tensor.

  • Models with more than 255 classes are not supported. If a class label greater than 255 is detected, it will be downcast to 255 in the raw segmentation mask.
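Each color_palette entry is the integer value of a 0xRRGGBB hex code, so the decimal values can be computed directly in the shell. A hypothetical two-class palette of red and green, shown only to illustrate the encoding:

    # color_palette entries are int64 values of 0xRRGGBB hex codes
    printf 'red=%d green=%d\n' 0xFF0000 0x00FF00
    # prints: red=16711680 green=65280

Those decimal integers would then be passed in the color_palette list, one entry per class label.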

ROS Topics Subscribed

tensor_sub (isaac_ros_tensor_list_interfaces/TensorList)
    The tensor that contains raw probabilities for every class in each pixel.

Note

All input images are required to have height and width that are both an even number of pixels.

ROS Topics Published

unet/raw_segmentation_mask (sensor_msgs/Image)
    The raw segmentation mask, encoded in mono8. Each pixel value represents a class label.

unet/colored_segmentation_mask (sensor_msgs/Image)
    The colored segmentation mask. Colors are assigned per class from the user-specified color_palette parameter.
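To verify these outputs at runtime while the quickstart is running, the generic ros2 CLI is sufficient, for example:

    # Show the message type and publisher count for each output
    ros2 topic info /unet/raw_segmentation_mask
    ros2 topic info /unet/colored_segmentation_mask
    # Inspect a message header without dumping the pixel array
    ros2 topic echo /unet/raw_segmentation_mask --no-arr --once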