isaac_ros_centerpose

Source code on GitHub.

Quickstart

Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure required libraries are installed.

    sudo apt-get install -y curl jq tar
    

    Then, run these commands to download the asset from NGC:

    NGC_ORG="nvidia"
    NGC_TEAM="isaac"
    PACKAGE_NAME="isaac_ros_centerpose"
    NGC_RESOURCE="isaac_ros_centerpose_assets"
    NGC_FILENAME="quickstart.tar.gz"
    MAJOR_VERSION=3
    MINOR_VERSION=2
    VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
    AVAILABLE_VERSIONS=$(curl -s \
        -H "Accept: application/json" "$VERSION_REQ_URL")
    LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
        .recipeVersions[]
        | .versionId as \$v
        | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
        | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
        | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
        | \$v
        " | sort -V | tail -n 1
    )
    if [ -z "$LATEST_VERSION_ID" ]; then
        echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
        echo "Found versions:"
        echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
    else
        mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
        FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
    versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
        curl -LO --request GET "${FILE_REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
        rm ${NGC_FILENAME}
    fi
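The jq filter above selects only semantic versions (x.y.z) whose major version matches and whose minor version does not exceed the requested release, then picks the newest with sort -V. It can be exercised offline with sample, hypothetical version IDs to see how the latest compatible version is chosen:

```shell
# Demonstrate the version-selection filter with hypothetical version IDs.
# Non-semver entries ("beta") and incompatible versions (3.3.0, 2.1.0)
# are filtered out; sort -V picks the highest remaining version.
MAJOR_VERSION=3
MINOR_VERSION=2
SAMPLE='{"recipeVersions":[{"versionId":"3.2.1"},{"versionId":"3.2.0"},{"versionId":"3.3.0"},{"versionId":"2.1.0"},{"versionId":"beta"}]}'
echo "$SAMPLE" | jq -r "
    .recipeVersions[]
    | .versionId as \$v
    | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
    | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
    | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
    | \$v
    " | sort -V | tail -n 1
```

With this sample input the pipeline prints 3.2.1, the newest version compatible with release 3.2.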
    
  2. Download the CenterPose shoe ONNX file into version directory 1 of the model repository:

    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/centerpose_shoe/1 && \
       cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/centerpose_shoe/1 && \
       wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/centerpose/deployable_dla34/files?redirect=true&path=shoe_DLA34.onnx' -O model.onnx
  3. Move the quickstart model configuration file to the model repository:

    cp ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_centerpose/config.pbtxt ${ISAAC_ROS_WS}/isaac_ros_assets/models/centerpose_shoe/config.pbtxt
    

    Warning

    The name within the configuration file must match the model repository directory name. If the goal is to run another model, review the quickstart configuration file and modify it accordingly.
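For illustration, in a Triton model repository the directory name (here centerpose_shoe) must match the name field at the top of config.pbtxt. A minimal sketch of the relevant fields, assuming a TensorRT plan backend (values are illustrative, not necessarily the quickstart file's exact contents):

```
name: "centerpose_shoe"
platform: "tensorrt_plan"
max_batch_size: 1
```

If the model directory is renamed, the name field must be updated to match, or Triton will fail to load the model.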

Build isaac_ros_centerpose

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-centerpose
    

Run Launch File

  1. Continuing inside the Docker container, convert the ONNX model to a TensorRT engine plan:

    /usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/centerpose_shoe/1/model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/centerpose_shoe/1/model.plan
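Engine conversion can take several minutes. A quick sanity check, sketched here assuming the quickstart paths above, that both the ONNX file and the generated engine plan exist and are non-empty:

```shell
# Check that the quickstart model artifacts exist and are non-empty.
MODEL_DIR="${ISAAC_ROS_WS}/isaac_ros_assets/models/centerpose_shoe/1"
for f in model.onnx model.plan; do
    if [ -s "${MODEL_DIR}/${f}" ]; then
        echo "OK: ${f}"
    else
        echo "MISSING or empty: ${f} - re-run the corresponding step" >&2
    fi
done
```

If conversion is slow or the plan is large, trtexec also supports reduced precision via the --fp16 flag, which can trade some accuracy for speed.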
    
  2. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-examples
    
  3. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=centerpose,centerpose_visualizer interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_centerpose/quickstart_interface_specs.json model_name:=centerpose_shoe model_repository_paths:=[${ISAAC_ROS_WS}/isaac_ros_assets/models]
    
  4. Open a second terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  5. Run the rosbag file to simulate an image stream:

    ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_centerpose/quickstart.bag
    

Visualize Results

  1. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  2. Visualize and validate the output of the package with rqt_image_view:

    ros2 run rqt_image_view rqt_image_view /centerpose/image_visualized
    

    After about 1 minute, your output should look like this:

    RQT showing detected pose of shoes

Use Different Models

NGC has CenterPose class models that can detect other objects. Check them out here.

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see troubleshooting.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, please check here.

API

Usage

Two launch files are provided for this package. One uses isaac_ros_tensor_rt for inference, while the other uses isaac_ros_triton; both include the components needed to encode input images and decode the CenterPose network’s output.

Warning

For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.

Launch File | Components Used
--- | ---
isaac_ros_centerpose_tensor_rt.launch.py | DnnImageEncoderNode, TensorRTNode, CenterPoseDecoderNode, CenterPoseVisualizerNode
isaac_ros_centerpose_triton.launch.py | DnnImageEncoderNode, TritonNode, CenterPoseDecoderNode, CenterPoseVisualizerNode

CenterPoseDecoderNode

ROS Parameters

ROS Parameter | Type | Default | Description
--- | --- | --- | ---
output_field_size | int list | [128, 128] | An array of two integers giving the size of the 2D keypoint decoding from the network output.
cuboid_scaling_factor | float | 1.0 | Scales the cuboid used to calculate the size of detected objects.
score_threshold | float | 0.3 | Detections with a score below this value are discarded.
object_name | string | shoe | The name of the detected category instance / object.

ROS Topics Subscribed

ROS Topic | Interface | Description
--- | --- | ---
tensor_sub | isaac_ros_tensor_list_interfaces/TensorList | The TensorList that contains the outputs of the CenterPose network.
camera_info | sensor_msgs/CameraInfo | The CameraInfo of the original, undistorted image.

ROS Topics Published

ROS Topic | Interface | Description
--- | --- | ---
centerpose/detections | vision_msgs/Detection3DArray | A Detection3DArray representing the poses of objects detected by the CenterPose network and interpreted by the CenterPose decoder node.

CenterPoseVisualizerNode

ROS Parameters

ROS Parameter | Type | Default | Description
--- | --- | --- | ---
show_axes | bool | true | Whether to draw the axes of the detected pose.
bounding_box_color | int32_t | 0x000000ff | The color of the bounding box drawn. Only the last 24 bits are used to draw the color.
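Since only the low 24 bits of bounding_box_color are used, the three color bytes can be extracted with simple shifts and masks. A small sketch for the default 0x000000ff (the mapping of bytes to red/green/blue channels shown here is illustrative; check the node source for the exact channel assignment):

```shell
# Split the packed 24-bit color value into its three bytes.
color=$(( 0x000000ff ))
echo "byte2=$(( (color >> 16) & 0xff )) byte1=$(( (color >> 8) & 0xff )) byte0=$(( color & 0xff ))"
# prints: byte2=0 byte1=0 byte0=255
```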

ROS Topics Subscribed

ROS Topic | Interface | Description
--- | --- | ---
image | sensor_msgs/Image | The original, undistorted image.
camera_info | sensor_msgs/CameraInfo | The CameraInfo of the original, undistorted image.
centerpose/detections | vision_msgs/Detection3DArray | The detections made by the CenterPose decoder node.

ROS Topics Published

ROS Topic | Interface | Description
--- | --- | ---
centerpose/image_visualized | sensor_msgs/Image | An image containing the detection’s bounding box reprojected onto the image and, optionally, the axes of the detected objects.