isaac_ros_triton

Source code on GitHub.

Quickstart

Note

This quickstart demonstrates how to set up isaac_ros_triton. It is typically paired with an encoder node for pre-processing and a decoder node for post-processing to form a complete inference application.

To see the packages used in complete applications, refer here.

Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Build isaac_ros_triton

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package (see the verification check after this list):

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-triton
    

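To confirm that the package is visible to ROS 2 inside the container, a quick sanity check (the printed prefix depends on your installation):

    # Should print the package's install prefix, e.g. /opt/ros/humble
    ros2 pkg prefix isaac_ros_triton
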
Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see here.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, see here.

API

Usage

This package contains a launch file that solely launches isaac_ros_triton.

Note

For your specific application, these launch files may need to be modified. Consult the available components to see the configurable parameters.

Additionally, most applications require an encoder node to pre-process your data source and a decoder node to post-process the inference output.
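As a sketch, the provided launch file could be invoked with the model repository and model name set from the command line. This assumes the launch file exposes launch arguments matching the parameters documented below; the path and model name are placeholders:

    # Hypothetical invocation; substitute your own repository path and model name
    ros2 launch isaac_ros_triton isaac_ros_triton.launch.py \
        model_name:=peoplesemsegnet_shuffleseg \
        model_repository_paths:=['/tmp/models']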

Launch File                 Components Used
isaac_ros_triton.launch.py  TritonNode

TritonNode

ROS Parameters

model_repository_paths (string list, default: [''])
    The absolute paths to your model repositories in your local file system; the structure should follow Triton requirements, e.g. ['/tmp/models'].

model_name (string, default: '')
    The name of your model. Under model_repository_paths, there should be a directory with this name, and it should match the model name in the model configuration under that directory, e.g. peoplesemsegnet_shuffleseg.

max_batch_size (uint16_t, default: 8)
    The maximum batch size allowed for the model. It should match the model configuration.

num_concurrent_requests (uint16_t, default: 10)
    The number of requests the Triton server can take at a time. This should be set according to the tensor publisher frequency.

input_tensor_names (string list, default: ['input_tensor'])
    A list of tensor names to be bound to the specified input binding names. Bindings occur in sequential order, so the first name here is mapped to the first name in input_binding_names.

input_binding_names (string list, default: [''])
    A list of input tensor binding names specified by the model, e.g. ['input_2:0'].

input_tensor_formats (string list, default: [''])
    A list of input tensor NITROS formats, given in sequential order, e.g. ['nitros_tensor_list_nchw_rgb_f32'].

output_tensor_names (string list, default: ['output_tensor'])
    A list of tensor names to be bound to the specified output binding names.

output_binding_names (string list, default: [''])
    A list of output tensor binding names specified by the model, e.g. ['argmax_1'].

output_tensor_formats (string list, default: [''])
    A list of output tensor NITROS formats, given in sequential order, e.g. ['nitros_tensor_list_nchw_rgb_f32'].
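For reference, a model repository that satisfies model_repository_paths and model_name could be laid out as follows. This is a minimal sketch for a TensorRT model: the tensor shapes, data types, and the model.plan filename are illustrative assumptions, not values mandated by this package.

    # Illustrative layout:
    #   /tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt
    #   /tmp/models/peoplesemsegnet_shuffleseg/1/model.plan
    mkdir -p /tmp/models/peoplesemsegnet_shuffleseg/1
    cat > /tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt <<'EOF'
    name: "peoplesemsegnet_shuffleseg"
    platform: "tensorrt_plan"
    max_batch_size: 8
    input [
      {
        name: "input_2:0"
        data_type: TYPE_FP32
        dims: [ 3, 544, 960 ]
      }
    ]
    output [
      {
        name: "argmax_1"
        data_type: TYPE_INT32
        dims: [ 544, 960, 1 ]
      }
    ]
    EOF

The binding names in config.pbtxt are what input_binding_names and output_binding_names refer to, and max_batch_size here must match the ROS parameter of the same name.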

ROS Topics Subscribed

tensor_pub (isaac_ros_tensor_list_interfaces/TensorList)
    The input tensor stream.

ROS Topics Published

tensor_sub (isaac_ros_tensor_list_interfaces/TensorList)
    The tensor list of output tensors from the model inference.
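
With the node running, the standard ROS 2 CLI can inspect these topics (names assume no remapping into a namespace; --no-arr suppresses printing the raw tensor data arrays):

    # Confirm that publishers and subscribers are connected
    ros2 topic info /tensor_pub
    # Watch inference output metadata without dumping tensor contents
    ros2 topic echo /tensor_sub --no-arr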