Attention

As of June 30, 2025, the Isaac ROS Buildfarm for Isaac ROS 2.1 on Ubuntu 20.04 Focal is no longer supported.

Due to an isolated infrastructure event, all ROS 2 Humble Debian packages that were previously built for Ubuntu 20.04 are no longer available in the Isaac Apt Repository. All artifacts for Isaac ROS 3.0 and later are built and maintained with a more robust pipeline.

Users are encouraged to migrate to the latest version of Isaac ROS. The source code for Isaac ROS 2.1 continues to be available on the release-2.1 branches of the Isaac ROS GitHub repositories.

The original documentation for Isaac ROS 2.1 is preserved below.

Training your own DOPE model

Overview

The DOPE network is trained per object class, so using DOPE for pose estimation of a custom object class requires training a custom model for that class.

NVIDIA Isaac Sim offers a convenient workflow for training a custom DOPE model using synthetic data generation (SDG).
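
Before starting the walkthrough, it can help to confirm that the checkpoint produced by the training workflow loads as a regular PyTorch state_dict. The snippet below is a minimal sanity-check sketch, not part of the official workflow: the file name custom_model.pth matches the example used in the steps that follow, and it assumes the training script saved a plain state_dict (if the weights are wrapped in an outer dictionary, unwrap them first).

    # Sanity-check sketch: load the trained DOPE checkpoint and list a few
    # parameter tensors. Assumes the .pth file is a plain PyTorch state_dict.
    import torch

    state_dict = torch.load("custom_model.pth", map_location="cpu")

    for name, tensor in list(state_dict.items())[:5]:
        print(f"{name}: {tuple(tensor.shape)}")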

Tutorial Walkthrough

  1. Clone the Isaac Sim DOPE Training repository and follow the training instructions to prepare a custom DOPE model.

  2. Using the Isaac Sim DOPE inference script, test the custom DOPE model’s inference capability and ensure that the quality is acceptable for your use case.

  3. Follow steps 1-5 of the main DOPE quickstart.

  4. At step 6, move the prepared .pth model output from the Isaac Sim DOPE Training script into the /tmp/models path inside the Docker container:

    docker cp custom_model.pth isaac_ros_dev-x86_64-container:/tmp/models

  5. At step 7, run the dope_converter.py script with the custom model (an optional check of the converted model is sketched after this walkthrough):

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/custom_model.pth
    
  6. Proceed through steps 8-9.

  7. At step 10, launch the ROS 2 launch file with the custom model:

    ros2 launch isaac_ros_dope isaac_ros_dope_tensor_rt.launch.py model_file_path:=/tmp/models/custom_model.onnx engine_file_path:=/tmp/models/custom_model.plan
    
  8. Continue with the rest of the quickstart. You should now be able to detect poses of custom objects.
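
Optionally, after running dope_converter.py (walkthrough step 5) and before the TensorRT engine is built at launch, you can inspect the exported ONNX file to confirm that its input and output bindings look reasonable. The snippet below is a sketch, not part of the official quickstart: it uses the standard onnx Python package (pip install onnx) and should be run inside the Docker container, where the /tmp/models/custom_model.onnx path from the example above exists.

    # Sketch: print the graph inputs and outputs of the converted ONNX model.
    import onnx

    model = onnx.load("/tmp/models/custom_model.onnx")
    onnx.checker.check_model(model)  # raises if the model is structurally invalid

    for inp in model.graph.input:
        dims = [d.dim_value or d.dim_param for d in inp.type.tensor_type.shape.dim]
        print("input:", inp.name, dims)

    for out in model.graph.output:
        dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
        print("output:", out.name, dims)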