isaac_ros_tensor_rt

Source code on GitHub.

Quickstart

Note

This quickstart demonstrates setting up isaac_ros_tensor_rt. The package is often used with an encoder node for pre-processing and a decoder node for post-processing to form a complete application.

For examples of this package used in complete applications, please refer here.

Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Build isaac_ros_tensor_rt

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-tensor-rt
    

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, please check here.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, please check here.

API

Usage

This package contains a launch file that solely launches isaac_ros_tensor_rt.

Warning

For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.

Additionally, for most applications, an encoder node for pre-processing your data source and a decoder node for post-processing the inference output are required.
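
As a reference for such modifications, a custom launch file that loads TensorRTNode into a component container might look like the following minimal sketch. The package and plugin names match this package; the model path, engine path, and binding names are placeholder assumptions that must be replaced with values from your own model (see the parameter table below).

    # custom_tensor_rt.launch.py -- minimal sketch, not the packaged launch file.
    # Model paths and binding names below are placeholders for your model.
    import launch
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode


    def generate_launch_description():
        tensor_rt_node = ComposableNode(
            name='tensor_rt',
            package='isaac_ros_tensor_rt',
            plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
            parameters=[{
                'model_file_path': '/tmp/models/model.onnx',   # placeholder path
                'engine_file_path': '/tmp/trt_engine.plan',
                'force_engine_update': False,                  # reuse a pre-generated plan
                'input_tensor_names': ['input_tensor'],
                'input_binding_names': ['input_2:0'],          # binding name from your model
                'output_tensor_names': ['output_tensor'],
                'output_binding_names': ['argmax_1'],          # binding name from your model
            }],
        )

        container = ComposableNodeContainer(
            name='tensor_rt_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container_mt',
            composable_node_descriptions=[tensor_rt_node],
            output='screen',
        )
        return launch.LaunchDescription([container])

In such an application, the encoder node publishes to this node's tensor_pub topic and the decoder node subscribes to tensor_sub (see the topic tables below).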

| Launch File | Components Used |
|---|---|
| isaac_ros_tensor_rt.launch.py | TensorRTNode |

TensorRTNode

ROS Parameters

| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| model_file_path | string | model.onnx | The absolute path to your model file in the local file system (the model file must be .onnx), e.g. model.onnx |
| engine_file_path | string | /tmp/trt_engine.plan | The absolute path to either where the TensorRT engine plan will be generated (from your model file) or where a pre-generated engine plan file is located, e.g. model.plan |
| force_engine_update | bool | true | If true, the node always attempts to generate a TensorRT engine plan from your model file; set to false to use a pre-generated engine plan |
| input_tensor_names | string list | ['input_tensor'] | A list of tensor names to be bound to the specified input binding names. Bindings occur in sequential order, so the first name here is mapped to the first name in input_binding_names |
| input_binding_names | string list | [''] | A list of input tensor binding names specified by the model, e.g. ['input_2:0'] |
| input_tensor_formats | string list | [''] | A list of input tensor NITROS formats, given in sequential order, e.g. ['nitros_tensor_list_nchw_rgb_f32'] |
| output_tensor_names | string list | ['output_tensor'] | A list of tensor names to be bound to the specified output binding names |
| output_binding_names | string list | [''] | A list of output tensor binding names specified by the model, e.g. ['argmax_1'] |
| output_tensor_formats | string list | [''] | A list of output tensor NITROS formats, given in sequential order, e.g. ['nitros_tensor_list_nchw_rgb_f32'] |
| verbose | bool | true | If true, enables verbose logging to the console from the internal TensorRT execution |
| max_workspace_size | int64_t | 67108864 | The size of the TensorRT workspace in bytes |
| max_batch_size | int32_t | 1 | The maximum possible batch size, used when the first dimension is dynamic and treated as the batch size |
| dla_core | int64_t | -1 | The DLA core to use; the default of -1 runs on the GPU only. Fallback to GPU is always enabled |
| enable_fp16 | bool | true | Enables building a TensorRT engine plan that uses FP16 precision for inference; if false, the plan uses FP32 precision |
| relaxed_dimension_check | bool | true | Ignores dimensions of 1 in the input-tensor dimension check |
| num_blocks | int | 40 | The number of pre-allocated output memory blocks; should not be less than 40 |
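
When force_engine_update is false, the node expects a pre-generated engine plan at engine_file_path. One way to pre-generate a plan outside the node is the TensorRT Python API; the following is a minimal sketch assuming TensorRT 8.x-style bindings, with placeholder file paths.

    # Minimal sketch: pre-generate a TensorRT engine plan from an ONNX model
    # so TensorRTNode can load it with force_engine_update set to false.
    # Assumes TensorRT 8.x-style Python bindings; paths are placeholders.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch network, as required for ONNX models in TensorRT 8.x.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open('model.onnx', 'rb') as f:            # placeholder model path
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError('Failed to parse ONNX model')

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)          # mirrors enable_fp16 = true
    # 64 MiB workspace, matching the node's max_workspace_size default.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 67108864)

    plan = builder.build_serialized_network(network, config)
    if plan is None:
        raise RuntimeError('Failed to build engine plan')
    with open('/tmp/trt_engine.plan', 'wb') as f:  # matches engine_file_path
        f.write(plan)

Because engine plans are specific to the GPU and TensorRT version they were built with, generate the plan on the same device that will run inference.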

ROS Topics Subscribed

| ROS Topic | Type | Description |
|---|---|---|
| tensor_pub | isaac_ros_tensor_list_interfaces/TensorList | The input tensor stream |

ROS Topics Published

| ROS Topic | Type | Description |
|---|---|---|
| tensor_sub | isaac_ros_tensor_list_interfaces/TensorList | The tensor list of output tensors from the model inference |
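
For debugging the output stream (a proper decoder node is the usual approach in an application), a minimal rclpy subscriber might look like the sketch below. The tensor field names used here are assumptions based on the isaac_ros_tensor_list_interfaces message definitions.

    # Minimal sketch: subscribe to the inference output for debugging.
    # Field names (tensors, name, shape.dims) are assumptions based on the
    # isaac_ros_tensor_list_interfaces/TensorList message definition.
    import rclpy
    from rclpy.node import Node
    from isaac_ros_tensor_list_interfaces.msg import TensorList


    class TensorListLogger(Node):
        def __init__(self):
            super().__init__('tensor_list_logger')
            self.subscription = self.create_subscription(
                TensorList, 'tensor_sub', self.callback, 10)

        def callback(self, msg):
            # Log the name and dimensions of each output tensor.
            for tensor in msg.tensors:
                self.get_logger().info(
                    f'tensor {tensor.name}: dims={list(tensor.shape.dims)}')


    def main():
        rclpy.init()
        node = TensorListLogger()
        rclpy.spin(node)
        rclpy.shutdown()


    if __name__ == '__main__':
        main()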