isaac_ros_triton
Source code on GitHub.
Quickstart
Note
This quickstart demonstrates isaac_ros_triton in an image segmentation application. The demo therefore features an encoder node for pre-processing and a decoder node for post-processing; the raw inference result itself is simply a tensor.
To use the packages in other useful contexts, please refer here.
Set up your development environment by following the instructions here.
Clone isaac_ros_common, isaac_ros_image_segmentation, and this repository under ${ISAAC_ROS_WS}/src:

```bash
cd ${ISAAC_ROS_WS}/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_segmentation.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference.git
```
Launch the Docker container using the run_dev.sh script:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
```
Install this package’s dependencies, along with an additional package used for this quickstart:

```bash
sudo apt-get install -y ros-humble-isaac-ros-triton ros-humble-isaac-ros-unet
```
This example uses PeopleSemSegNet ShuffleSeg. Download the ETLT file and the int8 inference mode cache file:

```bash
mkdir -p /tmp/models/peoplesemsegnet_shuffleseg/1 && \
  cd /tmp/models/peoplesemsegnet_shuffleseg && \
  wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_etlt.etlt && \
  wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_cache.txt
```
Convert the ETLT file to a TensorRT plan file:

```bash
/opt/nvidia/tao/tao-converter -k tlt_encode \
  -d 3,544,960 \
  -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 \
  -t int8 \
  -c peoplesemsegnet_shuffleseg_cache.txt \
  -e /tmp/models/peoplesemsegnet_shuffleseg/1/model.plan \
  -o argmax_1 \
  peoplesemsegnet_shuffleseg_etlt.etlt
```
Create a file named /tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt by copying the sample Triton config file:

```bash
cp /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/resources/peoplesemsegnet_shuffleseg_config.pbtxt \
  /tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt
```
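For orientation, a Triton `config.pbtxt` for a TensorRT plan with these bindings generally follows the shape below. This is an illustrative sketch inferred from the converter flags above, not a copy of the shipped file; the file copied from isaac_ros_dnn_inference is authoritative:

```
name: "peoplesemsegnet_shuffleseg"
platform: "tensorrt_plan"
input [
  {
    name: "input_2:0"
    data_type: TYPE_FP32
    dims: [ 1, 3, 544, 960 ]
  }
]
output [
  {
    name: "argmax_1"
    data_type: TYPE_INT32
    dims: [ 1, 544, 960, 1 ]
  }
]
```

The directory name, the `name` field, and the `model_name` launch argument must all match for Triton to load the model.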
Run the following launch file to spin up a demo of this package. Launch Triton:

```bash
ros2 launch isaac_ros_unet isaac_ros_unet_triton.launch.py \
  model_name:=peoplesemsegnet_shuffleseg \
  model_repository_paths:=['/tmp/models'] \
  input_binding_names:=['input_2:0'] \
  output_binding_names:=['argmax_1'] \
  network_output_type:='argmax' \
  input_image_width:=1200 \
  input_image_height:=632
```
In another terminal, enter the Docker container:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
```
Then, play the ROS bag from isaac_ros_image_segmentation:

```bash
ros2 bag play -l src/isaac_ros_image_segmentation/resources/rosbags/unet_sample_data/
```
Visualize and validate the output of the package. In a third terminal, enter the Docker container:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
```
Then echo the inference result:

```bash
ros2 topic echo /tensor_sub
```
The expected result should look like this:

```yaml
header:
  stamp:
    sec: <time>
    nanosec: <time>
  frame_id: <frame-id>
tensors:
- name: output_tensor
  shape:
    rank: 4
    dims:
    - 1
    - 544
    - 960
    - 1
  data_type: 5
  strides:
  - 2088960
  - 3840
  - 4
  - 4
  data: [...]
```
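The `strides` field can be sanity-checked against `dims`: for a dense row-major tensor with 4-byte elements, each stride is the element size multiplied by the product of the trailing dimensions. A small sketch in plain Python (variable names are illustrative):

```python
# Recompute the expected row-major strides (in bytes) for the echoed tensor.
dims = [1, 544, 960, 1]   # from the TensorList message above
element_size = 4          # bytes per element (32-bit values)

strides = []
for i in range(len(dims)):
    stride = element_size
    for d in dims[i + 1:]:        # product of all trailing dimensions
        stride *= d
    strides.append(stride)

print(strides)  # [2088960, 3840, 4, 4], matching the echoed message
```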
This raw result is not very human-friendly. It is typically more desirable to see the network output after it has been decoded. The result of the entire image segmentation pipeline can be visualized by launching rqt_image_view:

```bash
ros2 run rqt_image_view rqt_image_view
```
Then, inside the rqt_image_view GUI, change the topic to /unet/colored_segmentation_mask to view a colorized segmentation mask.
Note
A launch file called isaac_ros_triton.launch.py is provided in this package to launch only Triton.
Warning
The Triton inference node expects tensors as input and produces tensors as output.
The inference itself is generic: any data source can be used, as long as the input data can be converted into a tensor and the model was trained on data of that form.
For example, Triton can be used with models trained on images, audio, and more, but the corresponding encoder and decoder nodes must be implemented.
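As a concrete illustration of what a decoder does, the sketch below colorizes a flat argmax mask (per-pixel class IDs, as produced by this model) into rows of RGB pixels. This is plain Python with a hypothetical two-class palette, not the actual U-Net decoder, which performs this step on the TensorList message inside a ROS node:

```python
# Minimal post-processing sketch: map per-pixel class IDs (the argmax
# output) to RGB colors. The class palette below is hypothetical.
PALETTE = {0: (0, 0, 0),     # background -> black
           1: (0, 255, 0)}   # person     -> green

def colorize(mask, width):
    """Turn a flat list of class IDs into rows of RGB tuples."""
    pixels = [PALETTE[c] for c in mask]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# Tiny 2x3 example mask standing in for a 544x960 inference result.
mask = [0, 1, 1,
        0, 0, 1]
image = colorize(mask, width=3)
print(image[0])  # first row: [(0, 0, 0), (0, 255, 0), (0, 255, 0)]
```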
Troubleshooting
Isaac ROS Troubleshooting
For solutions to problems with Isaac ROS, please check here.
Deep Learning Troubleshooting
For solutions to problems with using DNN models, please check here.
API
Usage
This package contains a launch file that solely launches isaac_ros_triton.
Warning
For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.
Additionally, for most applications, an encoder node for pre-processing your data source and a decoder node for post-processing the inference output are required.
| Launch File | Components Used |
|---|---|
| `isaac_ros_triton.launch.py` | `TritonNode` |

TritonNode
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| `model_repository_paths` | `string list` | `['']` | The absolute paths to your model repositories in your local file system (the structure should follow Triton requirements), e.g. `['/tmp/models']` |
| `model_name` | `string` | `""` | The name of your model. Under `model_repository_paths`, there should be a directory with this name, and it should align with the model name in the model configuration under that directory, e.g. `peoplesemsegnet_shuffleseg` |
| `max_batch_size` | `uint16_t` | `8` | The maximum batch size allowed for the model. It should align with the model configuration |
| `num_concurrent_requests` | `uint16_t` | `10` | The number of requests the Triton server can take at a time. This should be set according to the tensor publisher frequency |
| `input_tensor_names` | `string list` | `['input_tensor']` | A list of tensor names to be bound to specified input binding names. Bindings occur in sequential order, so the first name here will be mapped to the first name in `input_binding_names` |
| `input_binding_names` | `string list` | `['']` | A list of input tensor binding names specified by the model, e.g. `['input_2:0']` |
| `input_tensor_formats` | `string list` | `['']` | A list of input tensor NITROS formats. This should be given in sequential order, e.g. `['nitros_tensor_list_nchw_rgb_f32']` |
| `output_tensor_names` | `string list` | `['output_tensor']` | A list of tensor names to be bound to specified output binding names |
| `output_binding_names` | `string list` | `['']` | A list of output tensor binding names specified by the model, e.g. `['argmax_1']` |
| `output_tensor_formats` | `string list` | `['']` | A list of output tensor NITROS formats. This should be given in sequential order, e.g. `['nitros_tensor_list_nhwc_rgb_f32']` |
ROS Topics Subscribed

| ROS Topic | Type | Description |
|---|---|---|
| `tensor_pub` | `isaac_ros_tensor_list_interfaces/TensorList` | The input tensor stream |
ROS Topics Published

| ROS Topic | Type | Description |
|---|---|---|
| `tensor_sub` | `isaac_ros_tensor_list_interfaces/TensorList` | The tensor list of output tensors from the model inference |