Using DOPE at Different Image Resolutions
Overview
As described in the original paper, the DOPE network architecture can accept input images of arbitrary size and produces output belief maps of corresponding dimensions.
However, the ONNX format used to run this network on Triton or TensorRT
is not as flexible, and an ONNX-exported model does NOT support
arbitrary image sizes at inference time. Instead, the desired input
image dimensions must be explicitly specified when preparing the ONNX
file using the dope_converter.py
script, as referenced in the
quickstart.
Tutorial Walkthrough
1. Complete the quickstart here, up to the Run Launch File section.

2. Under the Run Launch File section, run the dope_converter.py script with the two additional arguments row and col specifying the desired input image size:

      ros2 run isaac_ros_dope dope_converter.py --format onnx \
          --input ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/Ketchup.pth --output ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/Ketchup.onnx \
          --row 1080 --col 1920
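
   To confirm that the exported model was built with the intended dimensions, you can print the input tensor shape stored in the ONNX file. This is a minimal sketch, assuming the onnx Python package is available in your environment, that the model was written to the path used above, and that the image tensor is the model's first input:

      # Optional sanity check: print the input tensor shape recorded in the exported ONNX file.
      # The printed dimensions should include the row/col values passed to dope_converter.py.
      python3 -c "import onnx; model = onnx.load('${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/Ketchup.onnx'); print(model.graph.input[0].type.tensor_type.shape)"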
3. Continuing in the quickstart’s Run Launch File section, complete the data source-specific setup. Then, launch the data source-specific isaac_ros_examples.launch.py file with two additional arguments, network_image_height and network_image_width, specifying the desired input image size. For example:
4. Continue with the rest of the quickstart. You should now be able to detect poses in images of your desired size.