isaac_ros_dope
Source code on GitHub.
Quickstart
Warning
Step 7 must be performed on x86_64. The resultant model should then be copied over to the Jetson. Note also that the model preparation process differs significantly from that of the other repositories.
Set up your development environment by following the instructions here.
Clone isaac_ros_common and this repository under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation.git
Pull down a ROS Bag of sample data:
cd ${ISAAC_ROS_WS}/src/isaac_ros_pose_estimation && \
  git lfs pull -X "" -I "resources/rosbags/"
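To confirm that the LFS pull actually fetched the bag, you can list the directory (this path matches the one used by the playback step later in this guide):

ls -lh resources/rosbags/dope_rosbag/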
Launch the Docker container using the run_dev.sh script:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
Install this package’s dependencies.
sudo apt-get install -y ros-humble-isaac-ros-dope ros-humble-isaac-ros-tensor-rt ros-humble-isaac-ros-dnn-image-encoder
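Optionally, verify that the newly installed packages are visible to ROS (a quick sanity check using the package names from the command above):

ros2 pkg list | grep -E 'isaac_ros_dope|isaac_ros_tensor_rt|isaac_ros_dnn_image_encoder'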
Make a directory to place models (inside the Docker container):
mkdir -p /tmp/models/
Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. The model is assumed to be downloaded to ~/Downloads outside the Docker container.

This example will use Ketchup.pth, which should be copied into /tmp/models inside the Docker container:

Note
This should be run outside the Docker container.

On x86_64:

cd ~/Downloads && \
  docker cp Ketchup.pth isaac_ros_dev-x86_64-container:/tmp/models
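You can confirm that the model landed in the container from the host side (the container name is the same one targeted by docker cp above):

docker exec isaac_ros_dev-x86_64-container ls -lh /tmp/models/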
Convert the PyTorch file into an ONNX file. Warning: this step must be performed on x86_64. The resultant model will be assumed to have been copied to the Jetson in the same output location (/tmp/models/Ketchup.onnx).

python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup.pth
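If the conversion succeeds, the ONNX file appears next to the .pth input. As an optional sanity check, you can also validate the graph with the onnx Python package, assuming it is available in the container:

ls -lh /tmp/models/Ketchup.onnx
python3 -c "import onnx; onnx.checker.check_model(onnx.load('/tmp/models/Ketchup.onnx'))"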
If you are planning on using a Jetson, copy the generated .onnx model to the Jetson, and then copy it into the aarch64 Docker container. We will assume that you have already transferred the model onto the Jetson into the directory ~/Downloads.

Enter the Docker container on the Jetson:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
Make a directory called /tmp/models on the Jetson (inside the Docker container):

mkdir -p /tmp/models
Outside the container, copy the generated .onnx model:

cd ~/Downloads && \
  docker cp Ketchup.onnx isaac_ros_dev-aarch64-container:/tmp/models
Run the following launch files to spin up a demo of this package:
Launch isaac_ros_dope:

ros2 launch isaac_ros_dope isaac_ros_dope_tensor_rt.launch.py model_file_path:=/tmp/models/Ketchup.onnx engine_file_path:=/tmp/models/Ketchup.plan
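On the first launch, TensorRT builds a serialized engine from the ONNX model, which can take several minutes; subsequent launches reuse it. Once the launch output settles, you can confirm the engine was written to the path given by engine_file_path:

ls -lh /tmp/models/Ketchup.plan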
Then open another terminal and enter the Docker container again:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
Then, play the ROS bag:
ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/
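If you want to see which topics the bag publishes before checking the outputs, inspect it with the standard bag tooling:

ros2 bag info src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/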
You should be able to get the poses of the objects in the images through ros2 topic echo. In a third terminal, enter the Docker container again:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh

Then echo the topic:
ros2 topic echo /poses
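The messages on /poses are of type geometry_msgs/msg/PoseArray (the same type you will select in RViz2 below); to understand the fields in the echoed output, you can print the interface definition:

ros2 interface show geometry_msgs/msg/PoseArray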
Note
We are echoing /poses because we remapped the original topic /dope/pose_array to /poses in the launch file.

Now visualize the pose array in RViz2:
rviz2
Then click on the Add button, select By topic, and choose PoseArray under /poses. Finally, change the display to show axes by updating Shape to Axes. Make sure to update the Fixed Frame to tf_camera.
Note
For best results, crop or resize input images to the same dimensions your DNN model is expecting.
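As a minimal offline sketch of that preprocessing (assuming python3 with OpenCV is available; input.jpg and output.jpg are placeholder file names), you can resize a test image to the 640x480 resolution referenced later in this document:

python3 -c "import cv2; cv2.imwrite('output.jpg', cv2.resize(cv2.imread('input.jpg'), (640, 480)))"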
Try More Examples
To continue your exploration, check out the following suggested examples:
Use Different Models
Click here for more information on how to use NGC models.
Alternatively, consult the DOPE model repository to try other models.
Model Name | Use Case
---|---
DOPE | The DOPE model repository. This should be used if …
Troubleshooting
Isaac ROS Troubleshooting
For solutions to problems with Isaac ROS, please check here.
Deep Learning Troubleshooting
For solutions to problems with using DNN models, please check here.
API
Usage
Two launch files are provided to use this package. The first launch file launches isaac_ros_tensor_rt, whereas the other one uses isaac_ros_triton, along with the necessary components to perform encoding on images and decoding of the DOPE network’s output.
Warning
For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.
Launch File | Components Used
---|---
isaac_ros_dope_tensor_rt.launch.py | DnnImageEncoderNode, TensorRTNode, DopeDecoderNode
isaac_ros_dope_triton.launch.py | DnnImageEncoderNode, TritonNode, DopeDecoderNode
Warning
There is also a config file that should be modified: isaac_ros_dope/config/dope_config.yaml.
DopeDecoderNode
ROS Parameters
ROS Parameter | Type | Default | Description
---|---|---|---
configuration_file | string | dope_config.yaml | The name of the configuration file to parse. Note: the node will look for that file name under isaac_ros_dope/config.
object_name | string | Ketchup | The object class the DOPE network is detecting and the DOPE decoder is interpreting. This name should be listed in the configuration file along with its corresponding cuboid dimensions.
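With the quickstart launch file running, you can inspect the live values of these parameters. The node name below is an assumption for illustration; confirm the actual name with the first command:

ros2 node list
ros2 param get /dope_decoder object_name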
Configuration File
The DOPE configuration file, which can be found at isaac_ros_dope/config/dope_config.yaml
may need to modified. Specifically, you will need to specify an object type in the DopeDecoderNode
that is listed in the dope_config.yaml
file, so the DOPE decoder node will pick the right parameters to transform the belief maps from the inference node to object poses. The dope_config.yaml
file uses the camera intrinsics of RealSense by default - if you are using a different camera, you will need to modify the camera_matrix field with the new, scaled (640x480)
camera intrinsics.
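For example, to review the default intrinsics before editing them for your camera (camera_matrix is the field named above; the context length passed to grep is arbitrary):

grep -A 3 'camera_matrix' ${ISAAC_ROS_WS}/src/isaac_ros_pose_estimation/isaac_ros_dope/config/dope_config.yaml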
Note
The object_name should correspond to one of the objects listed in the DOPE configuration file, with the corresponding model used.
ROS Topics Subscribed
ROS Topic | Interface | Description
---|---|---
belief_map_array | isaac_ros_tensor_list_interfaces/msg/TensorList | The tensor that represents the belief maps, which are outputs from the DOPE network.
ROS Topics Published
ROS Topic | Interface | Description
---|---|---
dope/pose_array | geometry_msgs/msg/PoseArray | An array of poses of the objects detected by the DOPE network and interpreted by the DOPE decoder node.
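To verify that the decoder is publishing and to gauge its output rate, the standard topic tools work on this topic (use /poses instead if you are running the quickstart launch file, which remaps it):

ros2 topic hz /dope/pose_array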