Tutorial for Obstacle Avoidance and Object Following Using cuMotion with Perception
Overview
This tutorial walks through the process of planning trajectories for a real robot, leveraging the cuMotion plugin for MoveIt 2 provided by Isaac ROS cuMotion on a Jetson AGX Orin. It demonstrates obstacle-aware planning using cuMotion in two different scenarios. The first moves the robot end effector between two predetermined poses, alternating between them while avoiding obstacles detected in the workspace. The second scenario goes further by having the robot end effector follow an object at a fixed offset distance, also while avoiding obstacles. For pose estimation, we provide the option to use either FoundationPose or Deep Object Pose Estimation (DOPE).
FoundationPose leverages a foundation model for object pose estimation and generalizes to objects not seen during training. It does, however, require either a segmentation mask or a 2D bounding box for the object. In this tutorial, RT-DETR is used to detect and return a 2D bounding box for the object of interest. If the SyntheticaDETR v1.0.0 model file is used, this object must be one of 27 household items for which the model was trained.
As an alternative, DOPE may be used to both detect the object and estimate the object’s pose, so long as a DOPE model has been trained for the object. In most scenarios, poses returned by DOPE are somewhat less accurate than those returned by FoundationPose, but DOPE incurs lower runtime overhead. On Jetson AGX Orin, DOPE runs at up to 5 FPS, while the combination of RT-DETR and FoundationPose more typically runs at approximately 1 FPS.
The pose-to-pose tutorial uses:
Stereo cameras for perception
Isaac ROS ESS for depth estimation
Isaac ROS Nvblox to produce a voxelized representation of the scene
The object-following tutorial uses the above and adds one of the following perception options:
If using FoundationPose:
Isaac ROS RT-DETR to detect an object of interest
Isaac ROS FoundationPose to estimate the 6 DoF pose of the desired object to approach
Drop nodes to reduce the input frame rate for RT-DETR, in order to more closely match the achievable frame rate of FoundationPose and thus limit unnecessary system load.
If using DOPE:
Isaac ROS DOPE to detect an object of interest and estimate its 6 DoF pose
Drop nodes to reduce the input frame rate for DOPE, in order to limit GPU utilization and avoid starvation of cuMotion and nvblox.
For each tutorial, the following numbers of stereo cameras are supported:

| Number of cameras per tutorial | Hawk | RealSense |
|---|---|---|
| Pose-to-pose | 1 | 2 |
| Object following | 1 | 2 |
Note
Multiple cameras can help reduce occlusion and noise in the scene and therefore increase the quality and completeness of the 3D reconstruction used for collision avoidance.
While the object following tutorial with multiple cameras runs scene reconstruction for obstacle-aware planning on all cameras, object detection and pose estimation are only enabled on the camera with the lowest index.
Reflective or smooth, featureless surfaces in the environment may increase noise in the depth estimation.
Use of multiple cameras is recommended.
Testing has been done with configurations of 2 RealSense cameras and (separately) 1 Hawk camera. Configurations with multiple Hawk cameras have not been tested.
Mixing different camera types has not been tested but may work with modifications to the launch files.
Warning
The obstacle avoidance behavior demonstrated in this tutorial is not a safety function and does not comply with any national or international functional safety standards. When testing obstacle avoidance behavior, do not use human limbs or other living entities.
The examples use RViz or Foxglove for visualization. RViz is the default visualization tool when Isaac Manipulator is running on the same computer that is displaying the visualization. Foxglove is recommended when visualizing a reconstruction streamed from a remote machine.
Requirements
The reference workflows have been tested on Jetson AGX Orin (64 GB).
Ensure that you have the following available:
A Universal Robots manipulator. This tutorial was validated on a UR5e and UR10e.
A Hawk stereo camera or up to two RealSense cameras.
One of the objects that sdetr_grasp or DOPE was trained on. In this tutorial, we are going to use the Mac and Cheese Box for FoundationPose and the Tomato Soup Can for DOPE.
Note
If using FoundationPose, ensure that you have a mesh and a texture file available for the desired object. To prepare an object, consult the FoundationPose documentation: isaac_ros_foundationpose.
Tutorial
Set Up Development Environment
Set up your development environment by following the instructions in getting started.
Complete the Hawk setup tutorial if using a Hawk stereo camera. Alternatively, complete the RealSense setup tutorial if using RealSense cameras.
It is recommended to use a PREEMPT_RT kernel on Jetson. Follow the PREEMPT_RT for Jetson guide outside of the container.
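To confirm that the PREEMPT_RT kernel is active after following that guide, you can check the kernel version string on the Jetson (run outside the container); this is a quick sanity check, not part of the official setup:

uname -v   # the output should mention PREEMPT_RT (or PREEMPT RT) if the real-time kernel is running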
Clone isaac_ros_common under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
Build Isaac ROS cuMotion
Clone isaac_manipulator under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone --recursive -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_manipulator.git isaac_manipulator
Clone the Isaac ROS fork of ros2_robotiq_gripper and tylerjw/serial under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone --recursive https://github.com/NVIDIA-ISAAC-ROS/ros2_robotiq_gripper && \
   git clone -b ros2 https://github.com/tylerjw/serial
Launch the Docker container using the run_dev.sh script:

cd $ISAAC_ROS_WS && ./src/isaac_ros_common/scripts/run_dev.sh
There are two options for installing this tutorial: installation from Debian packages or installation from source.
If installing from source, use rosdep to install the package's dependencies:

sudo apt-get update
rosdep update && \
   rosdep install -i -r \
      --from-paths ${ISAAC_ROS_WS}/src/isaac_manipulator/isaac_manipulator_bringup/ \
      --rosdistro humble -y
Build and source the ROS workspace:
cd ${ISAAC_ROS_WS}
colcon build --symlink-install --packages-up-to isaac_manipulator_bringup
source install/setup.bash
If installing from Debian, install isaac_manipulator_bringup and its dependencies:

sudo apt-get update
sudo apt-get install -y ros-humble-isaac-manipulator-bringup
Source the ROS workspace:
source /opt/ros/humble/setup.bash
Set Up UR Robot
Set up the UR robot by following Setting up a UR robot.
Create a program for external control by following Installing a URCap.
Warning
Extraction of calibration information from the UR robot is required to ensure that the ROS ur_robot_driver is able to accurately compute the TCP pose for a given joint configuration.

Save the IP address of the robot and substitute it for <ROBOT_IP_ADDRESS> in the instructions below.
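One way to extract the calibration mentioned in the warning above is the ur_calibration package that ships with the UR ROS 2 driver; the exact invocation may differ across driver versions, so treat this as a sketch and consult the driver documentation:

# Reads the factory calibration from the robot controller and writes it to a YAML file
# that can later be passed to the driver.
ros2 launch ur_calibration calibration_correction.launch.py \
   robot_ip:=<ROBOT_IP_ADDRESS> \
   target_filename:="${HOME}/my_robot_calibration.yaml"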
Set Up Cameras for Robot
If using a Hawk stereo camera, connect your camera via GMSL. If using RealSense cameras, connect them via USB 3 (see available USB ports of the Jetson AGX Orin here).
Note
When connecting your RealSense cameras via USB, make sure to use a USB 3 port and cable. It is recommended to use cables shorter than 3 m to guarantee stable transmission.
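To confirm that each RealSense camera is detected on a USB 3 link, and to read off the serial numbers needed in the configuration steps below, you can use the rs-enumerate-devices tool from librealsense (field names may vary slightly between librealsense versions):

# Lists each connected RealSense device; a "Usb Type Descriptor" of 3.x indicates a USB 3 connection.
rs-enumerate-devices | grep -E "Serial Number|Usb Type Descriptor"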
Place your stereo cameras such that their field of view fully covers the workspace that the robot arm will be operating in.
Note
Camera Placement Guide: Object detection networks, as well as nvblox, which is used for collision avoidance, are affected by the placement of cameras with respect to the workspace. Here we provide guidelines for placing cameras to achieve the best results. Deviation from these guidelines will degrade the quality of object detection and 3D reconstruction for obstacle avoidance. We recommend the following:

Distance: Place cameras approximately 1 m from the bounds of the workspace.

Pitch: Locate cameras above the workspace surface such that they pitch down. We recommend a pitch of 20 degrees; pitches in the range of 10 to 30 degrees are acceptable.

Multiple Cameras: For multiple cameras (currently a maximum of two are supported), we recommend that they view the workspace from significantly different directions. In particular, the yaw difference between the two cameras' viewing angles around the world z-axis should be greater than 90 degrees. Greater yaw and viewpoint differences between the cameras reduce occlusion in the scene and increase the quality and completeness of the 3D reconstruction used for collision avoidance.
Decide on a name for your setup and substitute it for <SETUP_NAME> in the instructions below.

If your test setup includes multiple RealSense cameras, identify their serial numbers. Copy the unspecified.yaml file to a new file named <SETUP_NAME>.yaml in the same folder and fill in the camera serial numbers.

Follow the Manipulator Camera Calibration instructions to calibrate the cameras with respect to the robot. The calibration.launch.py output from each calibration process will include a code snippet similar to the following example (values included for illustration only, as they will differ for different setups):

"""
Static transform publisher acquired via MoveIt 2 hand-eye calibration
"""
"""
EYE-TO-HAND: world -> camera
"""
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description() -> LaunchDescription:
    nodes = [
        Node(
            package="tf2_ros",
            executable="static_transform_publisher",
            output="log",
            arguments=[
                "--frame-id",
                "world",
                "--child-frame-id",
                "camera",
                "--x",
                "1.77278",
                "--y",
                "0.939827",
                "--z",
                "-0.0753478",
                "--qx",
                "-0.128117",
                "--qy",
                "-0.0317539",
                "--qz",
                "0.955077",
                "--qw",
                "-0.265339",
                # "--roll",
                # "0.132507",
                # "--pitch",
                # "-0.229891",
                # "--yaw",
                # "-2.5843",
            ],
        ),
    ]
    return LaunchDescription(nodes)
In the static_transforms.launch.py file, duplicate the hubble_test_bench item in the calibrations_dict dictionary and rename its key to <SETUP_NAME>.

Update the transforms in the new <SETUP_NAME> item with the calibrated pose values found in the calibration step above.

Note
If installing from Debian, modify the static_transforms.launch.py file found in /opt/ros/humble/share/isaac_manipulator_bringup/launch/include/.

Specifically, copy the calibrated values --x tx, --y ty, and --z tz to the "translation": [tx, ty, tz] field, and the calibrated values --qx qx, --qy qy, --qz qz, and --qw qw to the "rotation": [qx, qy, qz, qw] field.

For example, adding a setup named hawk_example with the calibrated values (for illustration only) from the previous point would look as follows:

calibrations_dict = {
    'hubble_test_bench': {
        'world_to_hawk': {
            'parent_frame': 'world',
            'child_frame': 'hawk',
            'translation': [-1.75433, -0.0887958, 0.419998],
            'rotation': [-0.00447052, 0.138631, -0.0101076, 0.990282],  # [qx, qy, qz, qw]
        },
        ...
    },
    'hawk_example': {
        'world_to_hawk': {
            'parent_frame': 'world',
            'child_frame': 'hawk',
            'translation': [1.77278, 0.939827, -0.0753478],
            'rotation': [-0.128117, -0.0317539, 0.955077, -0.265339],  # [qx, qy, qz, qw]
        },
        ...
    },
    ...
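Once the launch files in the later sections are running, you can verify that the calibrated static transform is actually being published by echoing it with tf2_echo. The frame names below follow the hawk_example entry above; substitute the parent and child frames used in your own setup:

# Prints the transform currently published between the two frames.
ros2 run tf2_ros tf2_echo world hawk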
Modify other appropriate values based on the tutorial:

For object following: Modify the object_to_grasp_frame transform in the <SETUP_NAME> item of the calibrations_dict dictionary in static_transforms.launch.py to a desired grasp pose relative to the detected object. Feel free to leave this as the default for simplicity.

For pose-to-pose: Update the world_to_target_frame_1 and world_to_target_frame_2 transforms in the <SETUP_NAME> item of the calibrations_dict dictionary in static_transforms.launch.py to two distinct poses that are reachable by the robot.

Set the workspace bounds for nvblox:
If installing from source: Copy hubble_test_bench.yaml to a new file named <SETUP_NAME>.yaml in the same folder and update the workspace bound corners. Then rebuild and source the isaac_manipulator_bringup package inside the Docker container:

cd ${ISAAC_ROS_WS} && \
   colcon build --symlink-install --packages-select isaac_manipulator_bringup && \
   source install/setup.bash

If installing from Debian: Copy hubble_test_bench.yaml to a new file named <SETUP_NAME>.yaml in /opt/ros/humble/share/isaac_manipulator_bringup/config/nvblox/workspace_bounds/ and update the workspace bound corners.

Note
How to choose the workspace bounds:
The workspace bounds define the space that is mapped for obstacle-aware planning and must cover the full space that the robot arm will be operating in.
The workspace min and max corners are defined in the nvblox global_frame set here.

The workspace bounds will be visualized when running the launch files (as a red bounding box). You can return to this step to adjust them after the system is running and producing visualizations.

A larger workspace will result in increased computational demands, which scale with the total number of voxels. The Isaac Manipulator reference workflows have been tested on Jetson AGX Orin with workspace volumes of up to 8 m³ with 1 cm³ voxels. If a larger workspace is desired, the voxel size may be increased proportionally while keeping the total number of voxels fixed. The trade-off is that larger voxels may increase the likelihood of planning failures or increase motion time in scenarios that require the robot to move in tight proximity to obstacles.
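As a rough sizing aid for the trade-off described above, the following snippet computes the voxel count for a given workspace volume and voxel edge length (illustrative only; the launch files do not consume these numbers directly). 8 m³ at 1 cm voxels is 8,000,000 voxels; doubling the voxel edge to 2 cm keeps the same voxel count for a 64 m³ workspace:

python3 -c "
volume_m3 = 8.0        # workspace volume in cubic meters
voxel_edge_m = 0.01    # voxel edge length in meters (1 cm)
print(f'{volume_m3 / voxel_edge_m**3:,.0f} voxels')
"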
Set Up Perception Deep Learning Models
Note
The ESS model is only required when using a Hawk stereo camera.
If you are building from source, prepare the ESS model to run depth estimation:
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
For the object-following tutorial, complete the following steps:
Get the SyntheticaDETR model to run object detection:
Download a pre-trained SyntheticaDETR model:
mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr && \
   cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr && \
   wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/synthetica_detr/versions/1.0.0_onnx/files/sdetr_grasp.onnx'
Convert the model (.onnx) to a TensorRT engine plan:

/usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan
Note
Running this command can take up to 15 minutes on Jetson AGX Orin.
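Optionally, you can sanity-check the generated RT-DETR engine before wiring it into the launch files by loading it with trtexec and running a few benchmark iterations on random inputs (a quick load check only, not a functional test of detection quality):

/usr/src/tensorrt/bin/trtexec --loadEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan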
Get the FoundationPose models to run pose estimation:
Download the pre-trained FoundationPose models:
mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose && \
   cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose && \
   wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/foundationpose/versions/1.0.0_onnx/files/refine_model.onnx' -O refine_model.onnx && \
   wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/foundationpose/versions/1.0.0_onnx/files/score_model.onnx' -O score_model.onnx
Convert the ONNX models (.onnx) to TensorRT engine plans:

Convert the refine model:
/usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan --minShapes=input1:1x160x160x6,input2:1x160x160x6 --optShapes=input1:1x160x160x6,input2:1x160x160x6 --maxShapes=input1:42x160x160x6,input2:42x160x160x6
Convert the score model:
/usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan --minShapes=input1:1x160x160x6,input2:1x160x160x6 --optShapes=input1:1x160x160x6,input2:1x160x160x6 --maxShapes=input1:252x160x160x6,input2:252x160x160x6
Note
Running this command can take up to 15 minutes on Jetson AGX Orin.
Download the Mac and Cheese texture and mesh:
Make sure required libraries are installed.
sudo apt-get install -y curl jq tar
Then, run these commands to download the asset from NGC:
NGC_ORG="nvidia" NGC_TEAM="isaac" PACKAGE_NAME="isaac_ros_foundationpose" NGC_RESOURCE="isaac_ros_foundationpose_assets" NGC_FILENAME="quickstart.tar.gz" MAJOR_VERSION=3 MINOR_VERSION=2 VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC" AVAILABLE_VERSIONS=$(curl -s \ -H "Accept: application/json" "$VERSION_REQ_URL") LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r " .recipeVersions[] | .versionId as \$v | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\")) | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber} | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION) | \$v " | sort -V | tail -n 1 ) if [ -z "$LATEST_VERSION_ID" ]; then echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION" echo "Found versions:" echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId' else mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \ FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\ versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \ curl -LO --request GET "${FILE_REQ_URL}" && \ tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \ rm ${NGC_FILENAME} fi
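After the download completes, you can check that the Mac and Cheese mesh and texture were extracted to the paths that the launch commands later in this tutorial expect:

ls ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/Mac_and_cheese_0_1.obj \
   ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/materials/textures/baked_mesh_tex0.png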
Get the DOPE model to run pose estimation:
Download the soup_60.pth DOPE model from the official DOPE GitHub repository's model collection available here.

Move this file to ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope. For example, if the model was downloaded to ~/Downloads:

mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/ && \
   mv ~/Downloads/soup_60.pth ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope
Convert the PyTorch model (.pth) to a general ONNX model (.onnx):

Note

This step must be performed on x86_64. If you intend to run the model on a Jetson, you must first convert the model on an x86_64 system and then copy the output file to the corresponding location on the Jetson (${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx).

ros2 run isaac_ros_dope dope_converter.py --format onnx --row 720 --col 1280 \
   --input ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.pth --output ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx
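If the conversion was performed on a separate x86_64 machine, one way to copy the resulting ONNX file to the Jetson is scp; <USER>, <JETSON_IP>, and the destination workspace path are placeholders for your own setup:

scp ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx \
   <USER>@<JETSON_IP>:<PATH_TO_ISAAC_ROS_WS_ON_JETSON>/isaac_ros_assets/models/dope/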
Run Launch Files and Deploy to Robot
Note
We recommend setting a ROS_DOMAIN_ID via export ROS_DOMAIN_ID=<ID_NUMBER> for every new terminal where you run ROS commands, to avoid interference with other computers on the same network (ROS Guide).
On the UR teach pendant, ensure that the robot’s remote program is loaded and that the robot is paused or stopped for safety purposes.
Launch the UR Robot Driver:
ros2 launch ur_robot_driver ur_control.launch.py ur_type:=<UR_TYPE> robot_ip:=<ROBOT_IP_ADDRESS> launch_rviz:=false
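Once the driver is running, you can optionally confirm that joint states are being received from the robot; this assumes the default /joint_states topic published by ur_robot_driver:

ros2 topic echo --once /joint_states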
Note
The robot driver is needed to publish joint states and transforms for the robot, and is required for the pipeline to work.

This tutorial was validated using ur_type:=ur5e and ur_type:=ur10e.

Open a new terminal inside the Docker container:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Source the ROS workspace:
cd ${ISAAC_ROS_WS} && \ source install/setup.bash
Launch MoveIt with cuMotion for the UR robot:
ros2 launch isaac_ros_cumotion_examples ur.launch.py ur_type:=<UR_TYPE> robot_ip:=<ROBOT_IP_ADDRESS> launch_rviz:=true
Warning
To prevent potential collisions, ensure that all obstacles are either fully in view of the camera or are added manually to the MoveIt planning scene. In MoveIt's RViz interface, obstacles may be added in the "Scene Objects" tab, as described in the isaac_ros_cumotion_moveit Quickstart.

Open another terminal inside the Docker container:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Source the ROS workspace:
cd ${ISAAC_ROS_WS} && \ source install/setup.bash
Launch the desired Isaac Manipulator example:
Note
The robot_file_name corresponds to the robot parameter found here. This tutorial has been tested with UR5e and UR10e robots, using URDF and XRDF files that incorporate the Robotiq 2F-140 gripper. The files can be found in the share directory of the isaac_ros_cumotion_robot_description package, with the following names:

UR5e: ur5e_robotiq_2f_140.urdf and ur5e_robotiq_2f_140.xrdf

UR10e: ur10e_robotiq_2f_140.urdf and ur10e_robotiq_2f_140.xrdf
Install the Hawk stereo camera package:
sudo apt-get install -y ros-humble-isaac-ros-hawk
Launch the object following example (Hawk camera with FoundationPose):

ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \
   ess_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dnn_stereo_disparity/dnn_stereo_disparity_v4.1.0_onnx/ess.engine \
   rtdetr_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan \
   refine_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan \
   score_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan \
   mesh_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/Mac_and_cheese_0_1.obj \
   texture_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/materials/textures/baked_mesh_tex0.png \
   rtdetr_class_id:=22 \
   robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/<XRDF_FILE_NAME>.xrdf \
   urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/<URDF_FILE_NAME>.urdf \
   camera_type:=hawk setup:=<SETUP_NAME>
Note
The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (e.g., with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned.
Launch the object following example (RealSense cameras with FoundationPose):

ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \
   rtdetr_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan \
   refine_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan \
   score_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan \
   mesh_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/Mac_and_cheese_0_1.obj \
   texture_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/materials/textures/baked_mesh_tex0.png \
   rtdetr_class_id:=22 \
   camera_type:=realsense num_cameras:=<NUMBER_OF_CAMERAS> setup:=<SETUP_NAME> \
   robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/<XRDF_FILE_NAME>.xrdf \
   urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/<URDF_FILE_NAME>.urdf
Note
The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (e.g., with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned.
Install the Hawk stereo camera package:
sudo apt-get install -y ros-humble-isaac-ros-hawk
Launch the object following example (Hawk camera with DOPE):

ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \
   pose_estimation_type:=dope \
   ess_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dnn_stereo_disparity/dnn_stereo_disparity_v4.1.0_onnx/ess.engine \
   dope_model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx \
   dope_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.plan \
   camera_type:=hawk setup:=<SETUP_NAME> \
   robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/<XRDF_FILE_NAME>.xrdf \
   urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/<URDF_FILE_NAME>.urdf
Note
The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (e.g., with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned.
Select the detections topic in the Pose Estimate tab in RViz to see the DOPE pose estimation results in the 3D viewport.
Launch the object following example (RealSense cameras with DOPE):

ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \
   pose_estimation_type:=dope \
   dope_model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx \
   dope_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.plan \
   camera_type:=realsense num_cameras:=<NUMBER_OF_CAMERAS> setup:=<SETUP_NAME> \
   robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/<XRDF_FILE_NAME>.xrdf \
   urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/<URDF_FILE_NAME>.urdf
Note
The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (e.g., with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned.
Select the detections topic in the Pose Estimate tab in RViz to see the DOPE pose estimation results in the 3D viewport.
Install the Hawk stereo camera package:
sudo apt-get install -y ros-humble-isaac-ros-hawk
Launch the pose-to-pose example (Hawk camera):

ros2 launch isaac_manipulator_bringup cumotion_nvblox_pose_to_pose.launch.py \
   ess_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dnn_stereo_disparity/dnn_stereo_disparity_v4.1.0_onnx/ess.engine \
   camera_type:=hawk setup:=<SETUP_NAME> \
   robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/<XRDF_FILE_NAME>.xrdf \
   urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/<URDF_FILE_NAME>.urdf
Launch the pose-to-pose example (RealSense cameras):

ros2 launch isaac_manipulator_bringup cumotion_nvblox_pose_to_pose.launch.py \
   camera_type:=realsense num_cameras:=<NUMBER_OF_CAMERAS> setup:=<SETUP_NAME> \
   robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/<XRDF_FILE_NAME>.xrdf \
   urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/<URDF_FILE_NAME>.urdf
Note
To launch the example without robot arm control, that is, with perception only, you can add the no_robot_mode:=True flag to any of the launch commands above.

On the UR teach pendant, press the play button to enable the robot.
For additional details about the launch files, consult the documentation for the isaac_manipulator_bringup package.
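To see which launch arguments each example accepts (and their default values) without starting anything, you can pass the standard --show-args flag to ros2 launch, for example:

ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py --show-args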
Visualizing in Foxglove
To visualize a reconstruction streamed from a remote machine, for example a robot, we recommend using Foxglove.
When running Isaac Manipulator on Jetson AGX Orin, it is recommended to display the visualization on a remote machine instead of on the Jetson directly. This will free up compute resources for the Isaac Manipulator pipeline.
When visualizing from a remote machine over WiFi, bandwidth is limited and easily exceeded. Exceeding the available bandwidth can lead to poor visualization results. For best results, we recommend visualizing a limited number of topics and avoiding high-bandwidth topics such as images.
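Before adding a topic to the Foxglove layout, you can measure how much bandwidth it consumes and decide whether it is worth streaming over WiFi; <TOPIC_NAME> is a placeholder for any topic from your running graph (ros2 topic list shows what is available):

ros2 topic bw <TOPIC_NAME>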
To visualize with Foxglove, complete the following steps on your remote visualization machine:
Complete the Foxglove setup guide.
For the RealSense pose-to-pose tutorial, see the realsense_foxglove.json Foxglove layout configuration as an example. When running with a Hawk camera, use the hawk_foxglove.json configuration.
Launch any Isaac Manipulator tutorial with the commands shown here and append the run_foxglove:=True flag.