Tutorial for DOPE Inference with Triton
Overview
This tutorial walks you through a graph that estimates the 6DOF pose of a target object using DOPE with different backends. It uses input monocular images from a rosbag. The different backends shown are:
PyTorch and ONNX
TensorRT Plan files with Triton
PyTorch model with Triton
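For reference, the backends map onto Triton platform strings and model file names as follows (summarized from the steps below):

Backend                Triton platform      Model file   Runs on
ONNX with Triton       onnxruntime_onnx     model.onnx   x86_64 only
TensorRT with Triton   tensorrt_plan        model.plan   x86_64 or Jetson
PyTorch with Triton    pytorch_libtorch     model.pt     x86_64 only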
Note
The DOPE converter script only works on x86_64, so the resultant ONNX model produced by following these steps must be copied to the Jetson.
Tutorial Walkthrough
Complete the quickstart here, up to and including Run Launch File.
Make a directory called dope_ketchup inside ${ISAAC_ROS_WS}/isaac_ros_assets/models, which will serve as the model repository. This will be versioned as 1. The downloaded model will be placed here:

mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1 && \
  cp ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/Ketchup.pth ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/
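After the copy, the repository should look roughly like the sketch below (the config.pbtxt file is created in a later step, and only one of the model files under 1/ will exist, depending on the backend you choose):

${ISAAC_ROS_WS}/isaac_ros_assets/models/
└── dope_ketchup/        # repository name, must match "name" in config.pbtxt
    ├── Ketchup.pth      # downloaded weights, input to the converter script
    ├── config.pbtxt     # Triton model configuration (created below)
    └── 1/               # version directory, matches version_policy below
        └── model.onnx   # or model.plan / model.pt, depending on the backend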
Now select a backend. The PyTorch and ONNX options must be run on x86_64:
To run ONNX models with Triton, export the model into an ONNX file using the dope_converter script provided:

ros2 run isaac_ros_dope dope_converter.py --format onnx \
  --input ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/Ketchup.pth --output ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1/model.onnx \
  --input_name INPUT__0 --output_name OUTPUT__0 --row 720 --col 1280
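If you want to sanity-check the exported file before pointing Triton at it, the onnx Python package can validate it. This is an optional check and assumes onnx is installed in the container:

python3 -c "import onnx; m = onnx.load('$ISAAC_ROS_WS/isaac_ros_assets/models/dope_ketchup/1/model.onnx'); onnx.checker.check_model(m); print('inputs:', [i.name for i in m.graph.input], 'outputs:', [o.name for o in m.graph.output])"

The printed binding names should match the INPUT__0 and OUTPUT__0 names used in config.pbtxt below.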
To run TensorRT Plan files with Triton, first copy the generated ONNX model from the previous step to the target platform (e.g. a Jetson or an x86_64 machine). The model is assumed to be copied to ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1/model.onnx inside the Docker container. Then use trtexec to convert the ONNX model to a plan file:

/usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1/model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1/model.plan
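Engine files are specific to the GPU and TensorRT version they were built with, which is why the conversion must run on the device that will serve the model. As an optional check, trtexec can load the engine back and run a timing pass:

/usr/src/tensorrt/bin/trtexec --loadEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1/model.plan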
To run a PyTorch model with Triton (PyTorch inference is supported on the x86_64 platform only), the model needs to be saved using torch.jit.save(). The downloaded DOPE model is saved with torch.save(), so export the DOPE model using the dope_converter script:

ros2 run isaac_ros_dope dope_converter.py --format pytorch \
  --input ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/Ketchup.pth --output ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/1/model.pt --row 720 --col 1280
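Because Triton's pytorch_libtorch backend loads TorchScript, you can optionally confirm that the exported file deserializes with torch.jit.load (assumes PyTorch is installed in the container):

python3 -c "import torch; m = torch.jit.load('$ISAAC_ROS_WS/isaac_ros_assets/models/dope_ketchup/1/model.pt'); print(type(m))"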
Create a configuration file for this model at the path ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope_ketchup/config.pbtxt. Note that the name has to be the same as the model repository directory. Depending on the backend selected in the previous step, a slightly different config.pbtxt file must be created: onnxruntime_onnx (.onnx file), tensorrt_plan (.plan file) or pytorch_libtorch (.pt file):

name: "dope_ketchup"
platform: <insert-platform>
max_batch_size: 0
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 3, 720, 1280 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 25, 90, 160 ]
  }
]
version_policy: {
  specific {
    versions: [ 1 ]
  }
}
The <insert-platform> part should be replaced with onnxruntime_onnx for .onnx files, tensorrt_plan for .plan files, and pytorch_libtorch for .pt files.
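For example, if you built a TensorRT engine in the previous step, the first two lines of config.pbtxt would read (the rest of the file is unchanged):

name: "dope_ketchup"
platform: "tensorrt_plan"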
Note
The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop or resize using ROS 2 nodes from the Isaac ROS Image Pipeline or similar packages. If another image resolution is desired, please see here.
Note
The model file name must be model.<selected-platform-extension>.
Start isaac_ros_dope using the launch file:

ros2 launch isaac_ros_dope isaac_ros_dope_triton.launch.py model_name:=dope_ketchup model_repository_paths:=[${ISAAC_ROS_WS}/isaac_ros_assets/models] input_binding_names:=['INPUT__0'] output_binding_names:=['OUTPUT__0'] object_name:=Ketchup
Note
object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.
Open another terminal, and enter the Docker container again:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
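Before playing the bag, you can optionally confirm that the DOPE graph came up by checking that the detections topic exists:

ros2 topic list | grep detections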
Then, play the rosbag:
ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_dope/quickstart.bag
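If you want to inspect the recording first, ros2 bag info prints its topics and duration:

ros2 bag info ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_dope/quickstart.bag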
Open another terminal window and attach to the same container. You should be able to get the poses of the objects in the images through ros2 topic echo. In a third terminal, enter the Docker container again:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh

Then echo the detections:

ros2 topic echo /detections
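The messages on /detections are of type vision_msgs/Detection3DArray, and the estimated pose of each detected object is carried in the results field of each detection. The skeleton below is illustrative only (values elided), assuming the Humble-era vision_msgs layout:

header:
  stamp: {...}
  frame_id: ...
detections:
- header: {...}
  results:
  - hypothesis:
      class_id: Ketchup      # the object_name passed to the launch file
      score: ...
    pose:
      pose:
        position: {x: ..., y: ..., z: ...}
        orientation: {x: ..., y: ..., z: ..., w: ...}
  bbox: {...}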
Note
We are echoing /detections because we remapped the original topic /dope/detections to detections in the launch file.
Now visualize the detections array in RViz2:
rviz2
Make sure to update the Fixed Frame to tf_camera. Then click on the Add button, select By display type and choose Detection3DArray under vision_msgs_rviz_plugins. Expand the Detection3DArray display and change the topic to /detections. Check the Only Edge option. Then click on the Add button again and select By Topic. Under /dope_encoder, expand the /resize drop-down, select /image, and click the Camera option to see the image with the bounding box over detected objects. Refer to the pictures below.
Note
For best results, crop/resize input images to the same dimensions your DNN model is expecting.