isaac_ros_dope

Source code on GitHub.

Quickstart

Warning

The model conversion step (Step 8) must be performed on x86_64. The resultant model should then be copied over to the Jetson. Also note that the model preparation process differs significantly from that of the other Isaac ROS repositories.

  1. Set up your development environment by following the instructions here.

  2. Clone isaac_ros_common and this repository under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src
    
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
    
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation.git
    
  3. Pull down a ROS Bag of sample data:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_pose_estimation && \
      git lfs pull -X "" -I "resources/rosbags/"
    
  4. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh
    
  5. Install this package’s dependencies.

    sudo apt-get install -y ros-humble-isaac-ros-dope ros-humble-isaac-ros-tensor-rt ros-humble-isaac-ros-dnn-image-encoder
    
  6. Make a directory to place models (inside the Docker container):

    mkdir -p /tmp/models/
    
  7. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. The model is assumed to be downloaded to ~/Downloads outside the Docker container.

    This example will use Ketchup.pth, which should be downloaded into /tmp/models inside the Docker container:

    Note

    This should be run outside the Docker container.

    On x86_64:

    cd ~/Downloads && \
    docker cp Ketchup.pth isaac_ros_dev-x86_64-container:/tmp/models
    
  8. Convert the PyTorch file into an ONNX file. Warning: this step must be performed on x86_64. The resultant model is assumed to have been copied to the Jetson at the same output location (/tmp/models/Ketchup.onnx):

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup.pth
    
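    As an optional sanity check (assuming the onnx Python package is available inside the container), you can verify that the exported ONNX file is well-formed:

    python3 -c "import onnx; onnx.checker.check_model(onnx.load('/tmp/models/Ketchup.onnx'))"
    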

    If you are planning to use a Jetson, copy the generated .onnx model onto the Jetson, and then copy it into the aarch64 Docker container.

    The following assumes the model has already been transferred to ~/Downloads on the Jetson.

    Enter the Docker container on the Jetson:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh
    

    Make a directory called /tmp/models on the Jetson:

    mkdir -p /tmp/models
    

    Outside the container, copy the generated ONNX model into it:

    cd ~/Downloads && \
    docker cp Ketchup.onnx isaac_ros_dev-aarch64-container:/tmp/models
    
  9. Run the following launch file to spin up a demo of this package:

    Launch isaac_ros_dope:

    ros2 launch isaac_ros_dope isaac_ros_dope_tensor_rt.launch.py model_file_path:=/tmp/models/Ketchup.onnx engine_file_path:=/tmp/models/Ketchup.plan
    

    Then open another terminal, and enter the Docker container again:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh
    

    Then, play the ROS bag:

    ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/
    
  10. Open another terminal window and attach to the same container. You can then get the poses of the detected objects through ros2 topic echo:

    In a third terminal, enter the Docker container again:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh
    
    ros2 topic echo /poses
    

    Note

    We are echoing /poses because we remapped the original topic /dope/pose_array to /poses in the launch file.
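
    If you would rather consume the poses programmatically, a minimal rclpy subscriber along the lines of this sketch (the node and callback names are ours, not part of the package) does the same job as ros2 topic echo:

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PoseArray

    class PoseListener(Node):
        def __init__(self):
            super().__init__('dope_pose_listener')
            # /poses carries geometry_msgs/PoseArray (remapped from /dope/pose_array)
            self.create_subscription(PoseArray, 'poses', self.on_poses, 10)

        def on_poses(self, msg):
            for i, pose in enumerate(msg.poses):
                p = pose.position
                self.get_logger().info(f'object {i}: x={p.x:.3f} y={p.y:.3f} z={p.z:.3f}')

    def main():
        rclpy.init()
        node = PoseListener()
        rclpy.spin(node)
        rclpy.shutdown()

    if __name__ == '__main__':
        main()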

    Now visualize the pose array in RViz2:

    rviz2
    

    Then click the Add button, select By topic, and choose PoseArray under /poses. Finally, change the display to show axes by updating Shape to Axes, as shown in the screenshot below. Make sure to update the Fixed Frame to tf_camera.

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_pose_estimation/isaac_ros_dope/dope_rviz2.png

Note

For best results, crop or resize input images to the same dimensions your DNN model is expecting.
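
If your camera publishes at a different resolution, one option is to resize frames before they reach the encoder. The sketch below uses rclpy and cv_bridge; the topic names image_raw and image_resized are placeholders for your own pipeline, and the DNN image encoder may already handle resizing for you, so treat this as illustrative only:

    import cv2
    import rclpy
    from cv_bridge import CvBridge
    from rclpy.node import Node
    from sensor_msgs.msg import Image

    class ImageResizer(Node):
        def __init__(self):
            super().__init__('image_resizer')
            self.bridge = CvBridge()
            self.pub = self.create_publisher(Image, 'image_resized', 10)
            self.create_subscription(Image, 'image_raw', self.on_image, 10)

        def on_image(self, msg):
            frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
            resized = cv2.resize(frame, (640, 480))  # (width, height) the model expects
            out = self.bridge.cv2_to_imgmsg(resized, encoding='bgr8')
            out.header = msg.header  # preserve timestamp and frame_id
            self.pub.publish(out)

    def main():
        rclpy.init()
        rclpy.spin(ImageResizer())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()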

Try More Examples

To continue your exploration, check out the following suggested examples:

Use Different Models

Click here for more information on how to use NGC models.

Alternatively, consult the DOPE model repository to try other models.

Model Name   Use Case
DOPE         The DOPE model repository. Use these models with isaac_ros_dope.

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, please check here.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, please check here.

API

Usage

Two launch files are provided for this package: one performs inference with isaac_ros_tensor_rt, the other with isaac_ros_triton. Both also start the components needed to encode input images and decode the DOPE network’s output.

Warning

For your specific application, these launch files may need to be modified. Please consult the available components to see the configurable parameters.

Launch File                          Components Used
isaac_ros_dope_tensor_rt.launch.py   DnnImageEncoderNode, TensorRTNode, DopeDecoderNode
isaac_ros_dope_triton.launch.py      DnnImageEncoderNode, TritonNode, DopeDecoderNode
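
For reference, an invocation of the Triton variant might look like the line below. The argument names model_name and model_repository_paths are assumptions based on common TritonNode usage; consult the launch file for the arguments it actually exposes:

    ros2 launch isaac_ros_dope isaac_ros_dope_triton.launch.py model_name:=Ketchup model_repository_paths:=['/tmp/models']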

Warning

There is also a configuration file, isaac_ros_dope/config/dope_config.yaml, that may need to be modified for your setup.

DopeDecoderNode

ROS Parameters

ROS Parameter        Type     Default            Description
configuration_file   string   dope_config.yaml   The name of the configuration file to parse. Note: the node looks for this file under isaac_ros_dope/config.
object_name          string   Ketchup            The object class the DOPE network is detecting and the DOPE decoder is interpreting. This name should be listed in the configuration file along with its corresponding cuboid dimensions.
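
Both parameters can be set when composing the decoder into your own launch file. A minimal sketch follows; the plugin string is our assumption (verify it against the package's component registration), while the remapping mirrors the one the provided launch files use:

    from launch import LaunchDescription
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode

    def generate_launch_description():
        dope_decoder = ComposableNode(
            package='isaac_ros_dope',
            # Plugin name is an assumption; verify against the package registration.
            plugin='nvidia::isaac_ros::dope::DopeDecoderNode',
            name='dope_decoder',
            parameters=[{
                'configuration_file': 'dope_config.yaml',
                'object_name': 'Ketchup',
            }],
            # Same output remapping the provided launch files apply.
            remappings=[('dope/pose_array', 'poses')])

        container = ComposableNodeContainer(
            name='dope_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container_mt',
            composable_node_descriptions=[dope_decoder])

        return LaunchDescription([container])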

Configuration File

The DOPE configuration file, which can be found at isaac_ros_dope/config/dope_config.yaml, may need to be modified. Specifically, the object type you specify for the DopeDecoderNode must be listed in dope_config.yaml so that the DOPE decoder node picks the right parameters to transform the belief maps from the inference node into object poses. The dope_config.yaml file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field with the new, scaled (640x480) camera intrinsics.
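
The fragment below is only an illustrative sketch of the two pieces you would touch (per-object cuboid dimensions and the camera intrinsics); the key names and values are assumptions, so mirror the structure of the file actually shipped with the package:

    # Illustrative sketch only -- key names and values are assumptions; follow
    # the structure of the dope_config.yaml shipped with the package instead.
    dope:
      ros__parameters:
        # Row-major 3x3 intrinsics scaled to 640x480: [fx, 0, cx, 0, fy, cy, 0, 0, 1]
        camera_matrix: [616.0, 0.0, 320.0, 0.0, 616.0, 240.0, 0.0, 0.0, 1.0]
        # Cuboid dimensions for each object class the model detects (placeholder values)
        dimensions:
          Ketchup: [15.0, 4.5, 7.5]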

Note

The object_name should correspond to one of the objects listed in the DOPE configuration file, with the corresponding model used.

ROS Topics Subscribed

ROS Topic          Interface                                      Description
belief_map_array   isaac_ros_tensor_list_interfaces/TensorList    The tensor that represents the belief maps, which are outputs from the DOPE network.

ROS Topics Published

ROS Topic         Interface                 Description
dope/pose_array   geometry_msgs/PoseArray   An array of poses of the objects detected by the DOPE network and interpreted by the DOPE decoder node.