Using DOPE at Different Image Resolutions
Overview
The DOPE network architecture, as described in the original paper, accepts input images of arbitrary size and produces output belief maps of the corresponding dimensions.
However, the ONNX format used to run this network on Triton or TensorRT
is not as flexible, and an ONNX-exported model does NOT support
arbitrary image sizes at inference time. Instead, the desired input
image dimensions must be explicitly specified when preparing the ONNX
file using the dope_converter.py script, as referenced in the
quickstart.
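As a rough illustration of how input size relates to belief-map size, the following sketch assumes the original DOPE architecture's VGG-style backbone with a fixed output stride of 8 (an assumption about the network internals, not something stated in this document):

```python
def belief_map_size(rows: int, cols: int, stride: int = 8) -> tuple:
    """Estimate DOPE belief-map dimensions for a given input image size.

    Assumes the network downsamples by a fixed output stride (8 in the
    original DOPE architecture). This is a sketch, not a specification.
    """
    return rows // stride, cols // stride

# e.g. a 1080 x 1920 input yields roughly 135 x 240 belief maps
print(belief_map_size(1080, 1920))
```

Because the exported ONNX file bakes these dimensions in, changing the input resolution later requires re-running the converter rather than resizing at inference time.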
Tutorial Walkthrough
Complete the quickstart up to the `Run Launch File` section.

Under the `Run Launch File` section, run the `dope_converter.py` script with the two additional arguments `row` and `col` specifying the desired input image size:

```shell
ros2 run isaac_ros_dope dope_converter.py --format onnx \
    --input ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/Ketchup.pth \
    --output ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/Ketchup.onnx \
    --row 1080 --col 1920
```
Continuing in the quickstart's `Run Launch File` section, complete the data source-specific setup. Then, launch the data source-specific `isaac_ros_examples.launch.py` file with the two additional arguments `network_image_height` and `network_image_width` specifying the desired input image size.
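For example, with a RealSense camera as the data source, the launch command might look like the sketch below. The `launch_fragments` value shown here is an assumption; use the fragment names given in your data source's setup instructions:

```shell
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py \
    launch_fragments:=realsense_mono_rect,dope \
    network_image_height:=1080 network_image_width:=1920
```

Note that these values should match the `--row` and `--col` arguments passed to `dope_converter.py`, since the ONNX model only accepts that fixed input size.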
Continue with the rest of the quickstart. You should now be able to detect poses in images of your desired size.