Isaac ROS DNN Inference

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_peoplenet.jpg
https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_inference_peoplesemsegnet.jpg

Overview

Isaac ROS DNN Inference contains ROS 2 packages for performing DNN inference, providing AI-based perception for robotics applications. DNN inference uses a pre-trained DNN model to ingest an input Tensor and output a prediction to an output Tensor.

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_inference_nodegraph.png

Above is a typical graph of nodes for DNN inference on image data. The input image is resized to match the input resolution of the DNN; the image resolution may be reduced to improve DNN inference performance, which typically scales directly with the number of pixels in the image. DNN inference requires input Tensors, so a DNN encoder node is used to convert the input image to Tensors, including any data pre-processing required by the DNN model. Once DNN inference is performed, a DNN decoder node converts the output Tensors into results that can be used by the application.
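
As a concrete illustration of the graph above, the sketch below wires an image encoder and a TensorRT inference node together as ROS 2 composable nodes in one container; a model-specific decoder would be appended in the same way. This is a minimal sketch only: the package, plugin, topic, and parameter names are assumptions for illustration and may differ between Isaac ROS releases, so consult the quickstart for the exact values.

```python
# Minimal launch sketch of an encode -> infer -> decode pipeline.
# Package, plugin, topic, and parameter names are assumptions for illustration.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    # Converts the input image into a tensor, including resize/normalization.
    encoder_node = ComposableNode(
        name='dnn_image_encoder',
        package='isaac_ros_dnn_image_encoder',    # assumed package name
        plugin='nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode',
        parameters=[{
            'network_image_width': 960,           # DNN input resolution
            'network_image_height': 544,
        }],
        remappings=[('image', 'image_rect')],     # camera image in
    )

    # Runs the optimized model on the encoded tensors.
    tensor_rt_node = ComposableNode(
        name='tensor_rt',
        package='isaac_ros_tensor_rt',
        plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
        parameters=[{
            'model_file_path': '/tmp/model.onnx',   # hypothetical model path
            'engine_file_path': '/tmp/model.plan',  # cached engine plan
        }],
    )

    # A model-specific decoder node would be added here to turn the output
    # tensors into detections, masks, poses, etc.
    container = ComposableNodeContainer(
        name='dnn_inference_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[encoder_node, tensor_rt_node],
        output='screen',
    )
    return LaunchDescription([container])
```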

TensorRT and Triton are two separate ROS nodes for performing DNN inference. The TensorRT node uses TensorRT to provide high-performance deep learning inference. TensorRT optimizes the DNN model for inference on the target hardware, including Jetson and discrete GPUs, and supports a specific set of operations commonly used by DNN models. For newer or bespoke DNN models that use unsupported operations, TensorRT may not be able to run inference; for these models, use the Triton node.
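
Because the engine plan is hardware-specific, the TensorRT node is typically given both the original model and a path at which to cache the generated plan, plus the mapping between ROS tensor names and the model's binding names. The parameter names below are illustrative assumptions; verify them against the TensorRT node documentation for your release.

```python
# Hedged parameter sketch for the TensorRT node; names are illustrative.
tensor_rt_parameters = {
    # Model to optimize. If no engine plan exists at 'engine_file_path'
    # (or 'force_engine_update' is True), the node builds one from this file.
    'model_file_path': '/tmp/model.onnx',
    # Serialized, hardware-specific engine plan written/read by the node.
    'engine_file_path': '/tmp/model.plan',
    'force_engine_update': False,
    # Map ROS-side tensor names to the model's input/output binding names.
    'input_tensor_names': ['input_tensor'],
    'input_binding_names': ['input_1'],
    'output_tensor_names': ['output_tensor'],
    'output_binding_names': ['output_1'],
}
```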

The Triton node uses the Triton Inference Server, which provides a compatible frontend supporting a combination of different inference backends (e.g., ONNX Runtime, TensorRT Engine Plan, TensorFlow, PyTorch). In-house benchmark results measure little difference between using TensorRT directly and configuring Triton to use TensorRT as a backend.
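
The Triton node is typically pointed at a Triton model repository rather than a single model file, and the backend is selected by the model's config.pbtxt inside that repository. The parameter names and repository layout below are assumptions for illustration, following Triton's standard repository convention.

```python
# Hedged parameter sketch for the Triton node; names are illustrative.
triton_parameters = {
    # Directory laid out as a Triton model repository, e.g.:
    #   /tmp/models/<model_name>/config.pbtxt
    #   /tmp/models/<model_name>/1/model.onnx   (or model.plan, model.pt, ...)
    'model_repository_paths': ['/tmp/models'],
    'model_name': 'my_model',                   # hypothetical model name
    # Tensor <-> binding name mapping, as for the TensorRT node.
    'input_tensor_names': ['input_tensor'],
    'input_binding_names': ['input_1'],
    'output_tensor_names': ['output_tensor'],
    'output_binding_names': ['output_1'],
}
```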

Some DNN models may require custom DNN encoders to convert the input data into the Tensor format needed by the model, and custom DNN decoders to convert the output Tensors into results that can be used by the application. Leverage the provided DNN encoder and decoder nodes for image bounding box detection and image segmentation, or implement your own custom nodes, as sketched below.
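
If none of the provided decoders match your model, a custom decoder can be an ordinary ROS 2 node that subscribes to the inference output and publishes an application-level result. The sketch below is a minimal rclpy example that assumes the isaac_ros_tensor_list_interfaces/TensorList message type, a single float32 output tensor, and illustrative topic and field names.

```python
# Minimal sketch of a custom DNN decoder node (rclpy). Message type, topic
# name, and tensor field handling are assumptions; adapt them to your model.
import numpy as np
import rclpy
from rclpy.node import Node
from isaac_ros_tensor_list_interfaces.msg import TensorList


class CustomDecoderNode(Node):
    def __init__(self):
        super().__init__('custom_dnn_decoder')
        # Subscribe to the inference node's output tensor list.
        self._sub = self.create_subscription(
            TensorList, 'tensor_sub', self.on_tensors, 10)

    def on_tensors(self, msg: TensorList) -> None:
        for tensor in msg.tensors:
            # Interpret the raw bytes as float32 and reshape using the
            # dimensions carried in the message (field names assumed).
            data = np.frombuffer(bytes(tensor.data), dtype=np.float32)
            data = data.reshape(list(tensor.shape.dims))
            # Model-specific post-processing (thresholding, NMS, ...) and
            # publishing of application messages would go here.
            self.get_logger().info(f'{tensor.name}: shape {data.shape}')


def main():
    rclpy.init()
    rclpy.spin(CustomDecoderNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```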

Note

DNN inference can be performed on different types of input data, including audio, video, text, and various sensor data, such as LIDAR, camera, and RADAR. This package provides implementations for DNN encode and DNN decode functions for images, which are commonly used for perception in robotics. The DNNs operate on Tensors for their input, output, and internal transformations, so the input image needs to be converted to a Tensor for DNN inferencing.

Quickstarts

Isaac ROS NITROS Acceleration

This package is powered by NVIDIA Isaac Transport for ROS (NITROS), which leverages type adaptation and negotiation to optimize message formats and dramatically accelerate communication between participating nodes.

DNN Models

To perform DNN inferencing, a DNN model is required. NGC provides pre-trained models for use in your robotics application. Using TAO, NGC pre-trained models can be fine-tuned for your application. You can also train your own DNN models, or download pre-trained models from one of the many model zoos available online, for use with the TensorRT and Triton ROS nodes.
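
If you prefer to generate the hardware-specific engine plan offline (rather than letting the TensorRT node build it on first launch), the TensorRT Python API can serialize one from an ONNX model. This is a rough sketch assuming a local model.onnx; the API surface changes between TensorRT versions, so treat it as illustrative rather than definitive.

```python
# Rough sketch: pre-building a TensorRT engine plan from an ONNX model.
# Paths are hypothetical and the API varies between TensorRT versions.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Older TensorRT releases require the explicit-batch flag; newer ones ignore it.
flags = 0
if hasattr(trt.NetworkDefinitionCreationFlag, 'EXPLICIT_BATCH'):
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)

parser = trt.OnnxParser(network, logger)
with open('model.onnx', 'rb') as f:                 # hypothetical model path
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
with open('model.plan', 'wb') as f:                 # plan consumed at runtime
    f.write(engine_bytes)
```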

For more details about the setup of TensorRT and Triton, see here.

Use Different Models

Click here for more information about how to use NGC models.

We also natively support the following packages to perform DNN inference in a variety of contexts:

| Package Name | Use Case |
|---|---|
| DNN Stereo Depth | Deep learned stereo depth estimation |
| Image Segmentation | NVIDIA-accelerated, deep learned semantic image segmentation |
| Object Detection | Deep learning model support for object detection including DetectNet |
| Pose Estimation | Deep learned, NVIDIA-accelerated 3D object pose estimation |
| Depth Segmentation | DNN-based depth segmentation and obstacle field ranging using Bi3D |

Packages

Supported Platforms

This package is designed and tested to be compatible with ROS 2 Humble running on Jetson or an x86_64 system with an NVIDIA GPU.

Note

Versions of ROS 2 other than Humble are not supported. This package depends on specific ROS 2 implementation features that were introduced beginning with the Humble release. ROS 2 versions after Humble have not yet been tested.

| Platform | Hardware | Software | Notes |
|---|---|---|---|
| Jetson | Jetson Orin | JetPack 6.1 | For best performance, ensure that power settings are configured appropriately. Jetson Orin Nano 4GB may not have enough memory to run many of the Isaac ROS packages and is not recommended. |
| x86_64 | Ampere or higher NVIDIA GPU Architecture with 8 GB RAM or higher | Ubuntu 22.04+, CUDA 12.6+ | |

Docker

To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following these steps. This streamlines your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.

Note

All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.

Customize your Dev Environment

To customize your development environment, reference this guide.

Updates

| Date | Changes |
|---|---|
| 2024-12-10 | Update to be compatible with JetPack 6.1 |
| 2024-09-26 | Update for Isaac ROS 3.1 |
| 2024-05-30 | Update to be compatible with JetPack 6.0 |
| 2023-10-18 | Updated for Isaac ROS 2.0.0 |
| 2023-05-25 | Performance improvements |
| 2023-04-05 | Source available GXF extensions |
| 2022-10-19 | Updated OSS licensing |
| 2022-08-31 | Update to be compatible with JetPack 5.0.2 |
| 2022-06-30 | Added format string parameter in Triton/TensorRT, switched to NITROS implementation, removed parameters in DNN Image Encoder |
| 2021-11-03 | Split DOPE and U-Net into separate repositories |
| 2021-10-20 | Initial release |