=================================================================================== Tutorial for Obstacle Avoidance and Object Following Using cuMotion with Perception =================================================================================== Overview -------- This tutorial walks through the process of planning trajectories for a real robot. It leverages the cuMotion plugin for MoveIt 2 provided by :ir_repo:`Isaac ROS cuMotion ` on a Jetson AGX Orin. This tutorial demonstrates obstacle-aware planning using cuMotion in two different scenarios. The first moves the robot end effector between two predetermined poses, alternating between them while avoiding obstacles detected in the workspace. The second scenario goes further by having the robot end effector follow an object at a fixed offset distance, also while avoiding obstacles. For pose estimation, we provide the option to use either FoundationPose or Deep Object Pose Estimation (DOPE). FoundationPose leverages a foundation model for object pose estimation and generalizes to objects not seen during training. It does, however, require either a segmentation mask or a 2D bounding box for the object. In this tutorial, RT-DETR is used to detect and return a 2D bounding box for the object of interest. If the SyntheticaDETR v1.0.0 model file is used, this object must be one of 27 household items for which the model was trained. As an alternative, DOPE may be used to both detect the object and estimate the object's pose, so long as a DOPE model has been trained for the object. In most scenarios, poses returned by DOPE are somewhat less accurate than those returned by FoundationPose, but DOPE incurs lower runtime overhead. On Jetson AGX Orin, DOPE runs at up to 5 FPS, while the combination of RT-DETR and FoundationPose more typically runs at approximately 1 FPS. .. figure:: :ir_lfs:`` :width: 100% :align: center High-level architecture of the Isaac Manipulator "object following" reference workflow The pose-to-pose tutorial uses: - Stereo cameras for perception - :ir_repo:`Isaac ROS ESS ` for depth estimation - :ir_repo:`Isaac ROS Nvblox ` to produce a voxelized representation of the scene The object-following tutorial uses the above and adds: .. tabs:: .. tab:: FoundationPose - :ir_repo:`Isaac ROS RT-DETR ` to detect an object of interest - :ir_repo:`Isaac ROS FoundationPose ` to estimate the 6 DoF pose of the desired object to approach - `Drop `_ nodes to reduce the input frame rate for RT-DETR, in order to more closely match the achievable frame rate for FoundationPose and thus limit unnecessary system load. .. tab:: DOPE - :ir_repo:`Isaac ROS DOPE ` to detect an object of interest and estimate its 6 DoF pose - `Drop `_ nodes to reduce the input frame rate for DOPE, in order to limit GPU utilization and avoid starvation of cuMotion and nvblox. For each tutorial, the following number of stereo cameras are supported: .. list-table:: :header-rows: 1 * - Number of Cameras per Tutorial - Hawk - RealSense * - Pose-to-pose - 1 - 2 * - Object following - 1 - 2 Use of multiple cameras is recommended. Multiple cameras can help reduce occlusion and noise in the scene and therefore increase the quality and completeness of the 3D reconstruction used for collision avoidance. The object following tutorial with multiple cameras runs scene reconstruction for obstacle-aware planning on all cameras. Testing has been done with configurations of 2 RealSense cameras and (separately) 1 Hawk camera. 
Configurations with multiple Hawk cameras have not been tested. Object detection and pose estimation are only enabled on the camera with the lowest index. Reflective or smooth, featureless surfaces in the environment may increase noise in the depth estimation. Mixing different types has not been tested but may work with modifications to the launch files. .. warning:: The obstacle avoidance behavior demonstrated in this tutorial is not a safety function and does not comply with any national or international functional safety standards. When testing obstacle avoidance behavior, do not use human limbs or other living entities. The examples use `RViz `__ or `Foxglove `__ for visualization. RViz is the default visualization tool when Isaac Manipulator is running on the same computer that is displaying the visualization. Foxglove is recommended when visualizing a reconstruction streamed from a remote machine. Requirements ------------ The reference workflows have been tested on Jetson AGX Orin (64 GB). Ensure that you have the following available: - A `Universal Robots `__ manipulator. This tutorial was validated on a UR5e and UR10e. - A `Hawk stereo camera `__ or up to two `RealSense cameras `__. - One of the objects that :ir_ngc:`sdetr_grasp ` or `DOPE `__ was trained on. In this tutorial, we are going to use the Mac and Cheese Box for FoundationPose and the Tomato Soup Can for DOPE. .. figure:: :ir_lfs:`` :width: 234px :align: center :alt: Soup Can object .. figure:: :ir_lfs:`` :width: 286px :align: center :alt: Mac and Cheese object If you are using FoundationPose for an object, ensure that you have a mesh and a texture file available for it. To prepare an object, consult the FoundationPose documentation: :doc:`isaac_ros_foundationpose ` Tutorial -------- Set Up Development Environment ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1. Set up your development environment by following the instructions in :doc:`getting started `. 2. Complete the :doc:`Hawk setup tutorial ` if using a Hawk stereo camera. Alternatively, complete the :doc:`RealSense setup tutorial ` if using RealSense cameras. 3. It is recommended to use a PREEMPT_RT kernel on Jetson. Follow the :doc:`PREEMPT_RT for Jetson guide ` **outside** of the container. 4. Clone ``isaac_ros_common`` under ``${ISAAC_ROS_WS}/src``. .. code:: bash cd ${ISAAC_ROS_WS}/src && \ git clone :ir_clone:`` Build Isaac ROS cuMotion ~~~~~~~~~~~~~~~~~~~~~~~~ #. Clone ``isaac_manipulator`` under ``${ISAAC_ROS_WS}/src``. .. code:: bash cd ${ISAAC_ROS_WS}/src && \ git clone --recursive :ir_clone:`` #. Clone the Isaac ROS fork of ``ros2_robotiq_gripper`` and ``tylerjw/serial`` under ``${ISAAC_ROS_WS}/src``. .. code:: bash cd ${ISAAC_ROS_WS}/src && \ git clone --recursive https://github.com/NVIDIA-ISAAC-ROS/ros2_robotiq_gripper && \ git clone -b ros2 https://github.com/tylerjw/serial * The fork of ``ros2_robotiq_gripper`` is required because it includes a fix for `Issue #57 `__ in the upstream repository. * The fork of ``tylerjw/serial`` is required because it includes a fix for `Issue #21 `__ in the upstream repository. #. Launch the Docker container using the ``run_dev.sh`` script: .. code:: bash cd $ISAAC_ROS_WS && ./src/isaac_ros_common/scripts/run_dev.sh #. There are two options for installing this tutorial, installation from Debian and installation from source. .. tabs:: .. tab:: Installation from Source #. Use ``rosdep`` to install the package’s dependencies: :ir_apt: .. 
code:: bash rosdep update && \ rosdep install -i -r \ --from-paths ${ISAAC_ROS_WS}/src/isaac_manipulator/isaac_manipulator_bringup/ \ --rosdistro humble -y #. Build and source the ROS workspace: .. code:: bash cd ${ISAAC_ROS_WS} colcon build --symlink-install --packages-up-to isaac_manipulator_bringup source install/setup.bash .. tab:: Installation from Debian #. Install ``isaac_manipulator_bringup`` and its dependencies: :ir_apt: .. code:: bash sudo apt-get install -y ros-humble-isaac-manipulator-bringup #. Source the ROS workspace: .. code:: bash source /opt/ros/humble/setup.bash .. _set-up-ur-robot: Set Up UR Robot ~~~~~~~~~~~~~~~ #. Set up the UR robot by following `Setting up a UR robot `__. #. Create a program for external control by following `Installing a URCap `__. .. warning:: `Extraction of calibration information `__ from the UR robot is required to ensure that the ROS ``ur_robot_driver`` is able to accurately compute the TCP pose for a given joint configuration. #. Save the IP address of the robot and substitute it for ```` in the instructions below. .. _set-up-cameras-for-robot: Set Up Cameras for Robot ~~~~~~~~~~~~~~~~~~~~~~~~ #. Connect your cameras: - If you are using a **Hawk** stereo camera, :ir_nova_docs:`connect your camera via GMSL `. - If you are using **RealSense** cameras, connect them via USB 3. See available USB ports of the Jetson AGX Orin `here `__. When connecting your RealSense cameras via USB, make sure to use a USB 3 port and cable. It is recommended to use cables shorter than 3 m to guarantee stable transmission. #. Place your stereo cameras such that their field of view fully covers the workspace that the robot arm will be operating in. **Camera Placement Guide:** Object detection networks, as well as ``nvblox``, which is used for collision avoidance, are affected by the placement of cameras with respect to the workspace. Here we provide guidelines for placing cameras to achieve the best results. *Deviation from these guidelines will degrade the quality of object detection and 3D reconstruction for obstacle avoidance.* We recommend the following: * **Distance:** Place cameras approximately 1 m from the bounds of the workspace. * **Pitch:** Locate cameras above the workspace surface such that they pitch down. We recommend 20 degrees, however pitches within the range of [10-30] are acceptable. * **Multiple Cameras:** For multiple cameras (currently a maximum of two are supported) we recommend that they view the workspace from significantly different viewing directions. In particular, we recommend that the yaw difference between the two cameras' viewing angles around the world z-axis is greater than 90 degrees. Greater yaw and increased viewpoint differences between the cameras reduce occlusion in the scene. They also increase the quality and completeness of the 3D reconstruction being used for collision avoidance. #. Decide on a name for your setup and substitute it for ```` in the instructions below. #. If your test setup includes multiple RealSense cameras `identify their serial numbers `__. Copy the :ir_repo:`unspecified.yaml ` file to a new file named ``.yaml`` in the same folder and fill in the camera serial numbers. #. Follow :doc:`Manipulator Camera Calibration ` instructions to calibrate the cameras with respect to the robot. The ``calibration.launch.py`` output from each calibration process will include a code snippet similar to the following example (values included for illustration only as they will differ for different setups): .. 
code:: python """ Static transform publisher acquired via MoveIt 2 hand-eye calibration """ """ EYE-TO-HAND: world -> camera """ from launch import LaunchDescription from launch_ros.actions import Node def generate_launch_description() -> LaunchDescription: nodes = [ Node( package="tf2_ros", executable="static_transform_publisher", output="log", arguments=[ "--frame-id", "world", "--child-frame-id", "camera", "--x", "1.77278", "--y", "0.939827", "--z", "-0.0753478", "--qx", "-0.128117", "--qy", "-0.0317539", "--qz", "0.955077", "--qw", "-0.265339", # "--roll", # "0.132507", # "--pitch", # "-0.229891", # "--yaw", # "-2.5843", ], ), ] return LaunchDescription(nodes) #. In the :ir_repo:`static_transforms.launch.py ` file, duplicate the ``hubble_test_bench`` item in the ``calibrations_dict`` dictionary and rename its key to ````. #. Update the transforms in the new ```` item with the calibrated pose values found in the calibration step above. .. note:: If installing from Debian, modify the ``static_transforms.launch.py`` file found in ``/opt/ros/humble/share/isaac_manipulator_bringup/launch/include/``. Specifically, copy the calibrated values ``--x tx``, ``--y ty``, and ``--z tz`` to the ``"translation": [tx, ty, tz]`` field, and the calibrated values ``--qx qx``, ``--qy qy``, ``--qz qz``, and ``--qw qw`` to the ``"rotation": [qx, qy, qz, qw]`` field. For example, adding a setup named ``hawk_example`` with the calibrated values (for illustration only) from the previous point would look as follows: .. code:: python calibrations_dict = { 'hubble_test_bench': { 'world_to_hawk': { 'parent_frame': 'world', 'child_frame': 'hawk', 'translation': [-1.75433, -0.0887958, 0.419998], 'rotation': [-0.00447052, 0.138631, -0.0101076, 0.990282], # [qx, qy, qz, qw] }, }, 'hawk_example': { 'world_to_hawk': { 'parent_frame': 'world', 'child_frame': 'hawk', 'translation': [1.77278, 0.939827, -0.0753478], 'rotation': [-0.128117, -0.0317539, 0.955077, -0.265339], # [qx, qy, qz, qw] }, ... #. Modify other appropriate values based on the tutorial: .. tabs:: .. tab:: Object Following Modify the ``object_to_grasp_frame`` transform in the ```` item of the ``calibrations_dict`` dictionary in :ir_repo:`static_transforms.launch.py `, to a desired grasp pose relative to the detected object. Feel free to leave this as the default for simplicity. .. tab:: Pose to Pose Update the ``world_to_target_frame_1`` and ``world_to_target_frame_2`` transforms in the ```` item of the ``calibrations_dict`` dictionary in :ir_repo:`static_transforms.launch.py `, to be two distinct poses that are reachable by the robot. #. Set the workspace bounds for nvblox: .. tabs:: .. tab:: Installation from Source #. Copy :ir_repo:`hubble_test_bench.yaml ` to a new file named ``.yaml`` in the same folder and update the workspace bound corners. #. Rebuild and source the ``isaac_manipulator_bringup`` package inside the Docker container: .. code:: bash cd ${ISAAC_ROS_WS} && \ colcon build --symlink-install --packages-select isaac_manipulator_bringup && \ source install/setup.bash .. tab:: Installation from Debian Copy :ir_repo:`hubble_test_bench.yaml ` to a new file named ``.yaml`` in ``/opt/ros/humble/share/isaac_manipulator_bringup/config/nvblox/workspace_bounds/``. **How to choose the workspace bounds:** * The workspace bounds define the space that is mapped for obstacle-aware planning and must cover the full space that the robot arm will be operating in. * The workspace min and max corners are defined in the nvblox ``global_frame`` set :ir_repo:`here `.
* The workspace bounds will be visualized when running the launch files (as a red bounding box). You can return to this step to adjust them after the system is running and producing visualizations. * A larger workspace will result in increased computational demands, which scale with the total number of voxels. The Isaac Manipulator reference workflows have been tested on Jetson AGX Orin with workspace volumes of up to 8 m\ :sup:`3` with 1 cm\ :sup:`3` voxels. If a larger workspace is desired, the voxel size may be increased proportionally while keeping the total number of voxels fixed. The trade-off is that larger voxels may increase the likelihood of planning failures or increase motion time in scenarios that require the robot to move in tight proximity to obstacles. .. _set-up-perception-deep-learning-models: Set Up Perception Deep Learning Models ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The `ESS `_ model is only required when using a Hawk stereo camera. **If you are building from source**, prepare the ESS model to run depth estimation: .. code:: ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula **For the object-following tutorial**, complete the following steps: .. tabs:: .. tab:: FoundationPose #. Get the SyntheticaDETR model to run object detection: #. Download a pre-trained :ir_ngc:`SyntheticaDETR ` model: .. code:: bash mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr && \ cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr && \ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/synthetica_detr/versions/1.0.0_onnx/files/sdetr_grasp.onnx' #. Convert the model (``.onnx``) to a TensorRT engine plan: .. code:: bash /usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan .. note:: Running this command can take up to 15 minutes on Jetson AGX Orin. #. Get the FoundationPose models to run pose estimation: #. Download the pre-trained :ir_ngc:`FoundationPose ` models: .. code:: bash mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose && \ cd ${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose && \ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/foundationpose/versions/1.0.0_onnx/files/refine_model.onnx' -O refine_model.onnx && \ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/isaac/foundationpose/versions/1.0.0_onnx/files/score_model.onnx' -O score_model.onnx #. Convert the ONNX models (``.onnx``) to TensorRT engine plans: Convert the refine model: .. code:: bash /usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan --minShapes=input1:1x160x160x6,input2:1x160x160x6 --optShapes=input1:1x160x160x6,input2:1x160x160x6 --maxShapes=input1:42x160x160x6,input2:42x160x160x6 Convert the score model: .. code:: bash /usr/src/tensorrt/bin/trtexec --onnx=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_model.onnx --saveEngine=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan --minShapes=input1:1x160x160x6,input2:1x160x160x6 --optShapes=input1:1x160x160x6,input2:1x160x160x6 --maxShapes=input1:252x160x160x6,input2:252x160x160x6 .. note:: Running this command can take up to 15 minutes on Jetson AGX Orin. #. Download the Mac and Cheese texture and mesh from: :ir_assets:` ` .. tab:: DOPE #. 
Get the DOPE model to run pose estimation: #. Download the ``soup_60.pth`` DOPE model from the official `DOPE GitHub `__ repository's model collection available `here `__. Move this file to ``${ISAAC_ROS_WS}/isaac_ros_assets/models/dope``. For example, if the model was downloaded to ``~/Downloads``: .. code:: bash mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/ && \ mv ~/Downloads/soup_60.pth ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope NVLabs has provided a DOPE pre-trained model using the `HOPE `__ dataset. To train your own DOPE model, see :doc:`here `. #. Convert the PyTorch model (``.pth``) to a general ONNX model (``.onnx``): .. note:: This step must be performed on ``x86_64``. If you intend to run the model on a Jetson, you must first convert the model on an ``x86_64`` system and then copy the output file to the corresponding location on the Jetson (``${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx``). .. code:: bash ros2 run isaac_ros_dope dope_converter.py --format onnx --row 720 --col 1280 \ --input ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.pth --output ${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx .. _run-launch-files: Run Launch Files and Deploy to Robot ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We recommend setting a ``ROS_DOMAIN_ID`` via ``export ROS_DOMAIN_ID=`` for every new terminal where you run ROS commands, to avoid interference with other computers on the same network (`ROS Guide `__). #. On the UR teach pendant, ensure that the robot's **remote program** is loaded and that the robot is **paused** or **stopped** for safety purposes. #. Launch the UR Robot Driver: .. code:: bash ros2 launch ur_robot_driver ur_control.launch.py ur_type:= robot_ip:= launch_rviz:=false The robot driver is needed to publish joint states and transforms to the robot and is required for the pipeline to work. This tutorial was validated using ``ur_type:=ur5e`` and ``ur_type:=ur10e``. #. Open a **new** terminal inside the Docker container: .. code:: bash cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \ ./scripts/run_dev.sh #. Source the ROS workspace: .. code:: bash cd ${ISAAC_ROS_WS} && \ source install/setup.bash #. Launch MoveIt with cuMotion for the UR robot: .. code:: bash ros2 launch isaac_ros_cumotion_examples ur.launch.py ur_type:= robot_ip:= launch_rviz:=true .. warning:: To prevent potential collisions, ensure that all obstacles are either fully in view of the camera or are added manually to the MoveIt planning scene. In MoveIt's RViz interface, obstacles may be added in the "Scene Objects" tab, as described in the ``isaac_ros_cumotion_moveit`` :doc:`Quickstart `. #. Open **another** terminal inside the Docker container: .. code:: bash cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \ ./scripts/run_dev.sh #. Source the ROS workspace: .. code:: bash cd ${ISAAC_ROS_WS} && \ source install/setup.bash #. Launch the desired Isaac Manipulator example: The ``robot_file_name`` corresponds to the robot parameter found :ref:`here `. This tutorial has been tested with UR5e and UR10e robots, using URDF and XRDF files that incorporate the Robotiq 2F-140 gripper. 
The files can be found in the ``share`` directory of the ``isaac_ros_cumotion_robot_description`` package, with the following names: * **UR5e**: ``ur5e_robotiq_2f_140.urdf`` and ``ur5e_robotiq_2f_140.xrdf`` * **UR10e**: ``ur10e_robotiq_2f_140.urdf`` and ``ur10e_robotiq_2f_140.xrdf`` To launch the example without robot arm control, that is, with perception only, you can add the ``no_robot_mode:=True`` flag to any of the launch commands below. .. tabs:: .. tab:: Hawk stereo camera with object following, using FoundationPose #. Install the Hawk stereo camera: .. code:: bash sudo apt-get install -y ros-humble-isaac-ros-hawk #. Launch the object following example: .. code:: bash ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \ ess_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dnn_stereo_disparity/dnn_stereo_disparity_v4.1.0_onnx/ess.engine \ rtdetr_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan \ refine_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan \ score_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan \ mesh_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/Mac_and_cheese_0_1.obj \ texture_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/materials/textures/baked_mesh_tex0.png \ rtdetr_class_id:=22 \ robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/.xrdf \ urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/.urdf \ camera_type:=hawk setup:= The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (for example, with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned. .. tab:: RealSense camera with object following, using FoundationPose #. Launch the object following example: .. code:: bash ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \ rtdetr_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/sdetr_grasp.plan \ refine_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan \ score_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan \ mesh_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/Mac_and_cheese_0_1.obj \ texture_path:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1/materials/textures/baked_mesh_tex0.png \ rtdetr_class_id:=22 \ camera_type:=realsense num_cameras:= setup:= \ robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/.xrdf \ urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/.urdf The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (for example, with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned. .. tab:: Hawk stereo camera with object following, using DOPE #. Install the Hawk stereo camera: .. code:: bash sudo apt-get install -y ros-humble-isaac-ros-hawk #. Launch the object following example: ..
code:: bash ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \ pose_estimation_type:=dope \ ess_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dnn_stereo_disparity/dnn_stereo_disparity_v4.1.0_onnx/ess.engine \ dope_model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx \ dope_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.plan \ camera_type:=hawk setup:= \ robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/.xrdf \ urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/.urdf The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (e.g., with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned. #. Select the ``detections`` topic in the ``Pose Estimate`` tab in RViz to see the DOPE pose estimation results in the 3D viewport. .. tab:: RealSense camera with object following, using DOPE #. Launch the object following example: .. code:: bash ros2 launch isaac_manipulator_bringup cumotion_object_following.launch.py \ pose_estimation_type:=dope \ dope_model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.onnx \ dope_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dope/soup_60.plan \ camera_type:=realsense num_cameras:= setup:= \ robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/.xrdf \ urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/.urdf The robot end effector will move to match the pose of the object at a fixed offset. If that pose would bring the robot into collision (for example, with a table surface), cuMotion will report a planning failure and the robot will remain stationary until the object is repositioned. #. Select the ``detections`` topic in the ``Pose Estimate`` tab in RViz to see the DOPE pose estimation results in the 3D viewport. .. tab:: Hawk stereo camera for pose-to-pose example #. Install the Hawk stereo camera: .. code:: bash sudo apt-get install -y ros-humble-isaac-ros-hawk #. Launch the pose-to-pose example: .. code:: bash ros2 launch isaac_manipulator_bringup cumotion_nvblox_pose_to_pose.launch.py \ ess_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/dnn_stereo_disparity/dnn_stereo_disparity_v4.1.0_onnx/ess.engine \ camera_type:=hawk setup:= \ robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/.xrdf \ urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/.urdf .. tab:: RealSense camera for pose-to-pose example #. Launch the pose-to-pose example: .. code:: bash ros2 launch isaac_manipulator_bringup cumotion_nvblox_pose_to_pose.launch.py \ camera_type:=realsense num_cameras:= setup:= \ robot_file_name:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/.xrdf \ urdf_file_path:=$(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/.urdf #. On the UR teach pendant, press **play** to enable the robot. .. figure:: :ir_lfs:`` :width: 576px :align: center "Pose-to-pose" example with obstacle avoidance via cuMotion and nvblox on a UR5e robot. For additional details about the launch files, consult the :doc:`documentation ` for the ``isaac_manipulator_bringup`` package. 
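If a launch command fails because a URDF or XRDF file cannot be found, it can help to list the robot description files that the ``isaac_ros_cumotion_robot_description`` package installs. The following quick check (assuming the workspace is sourced in the current terminal) uses the same ``ros2 pkg prefix`` pattern as the launch commands above:

.. code:: bash

   ls $(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/urdf/
   ls $(ros2 pkg prefix --share isaac_ros_cumotion_robot_description)/xrdf/

The listed file names (for example, ``ur5e_robotiq_2f_140.xrdf``) are the values referenced by the ``robot_file_name`` and ``urdf_file_path`` arguments.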
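Once an example is running, you can also confirm that the calibrated camera transform from the camera setup step is being broadcast. A minimal check uses ``tf2_echo``; the frame names below follow the ``hubble_test_bench`` calibration example (``world`` and ``hawk``), so substitute the frames used in your own setup:

.. code:: bash

   ros2 run tf2_ros tf2_echo world hawk

The reported translation and rotation should match the values entered in ``static_transforms.launch.py``.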
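As noted in the warning earlier in this section, obstacles that are not visible to the cameras must be added manually to the MoveIt planning scene. Besides the RViz ``Scene Objects`` tab, this can also be done programmatically. The following is a minimal sketch rather than part of this tutorial's packages: it assumes that ``move_group`` is running and subscribes to the default ``/planning_scene`` topic, that the planning frame is ``world`` (as in the calibration examples above), and that the box dimensions and pose are placeholder values for a table in front of the robot.

.. code:: python

   # Minimal sketch: publish a box-shaped collision object to the MoveIt planning scene.
   # Assumptions (not from this tutorial): move_group uses the default /planning_scene
   # topic, the planning frame is "world", and the box size/pose below are placeholders.
   import rclpy
   from rclpy.node import Node
   from geometry_msgs.msg import Pose
   from moveit_msgs.msg import CollisionObject, PlanningScene
   from shape_msgs.msg import SolidPrimitive


   class TableObstaclePublisher(Node):
       def __init__(self):
           super().__init__('table_obstacle_publisher')
           self.pub = self.create_publisher(PlanningScene, '/planning_scene', 10)

       def publish_table(self):
           obj = CollisionObject()
           obj.header.frame_id = 'world'        # assumed planning frame
           obj.id = 'table'
           box = SolidPrimitive()
           box.type = SolidPrimitive.BOX
           box.dimensions = [1.2, 0.8, 0.05]    # x, y, z extents in meters (placeholder)
           pose = Pose()
           pose.position.z = -0.025             # top surface flush with z = 0 (placeholder)
           pose.orientation.w = 1.0
           obj.primitives.append(box)
           obj.primitive_poses.append(pose)
           obj.operation = CollisionObject.ADD
           scene = PlanningScene()
           scene.is_diff = True                 # apply as a diff to the current scene
           scene.world.collision_objects.append(obj)
           self.pub.publish(scene)


   def main():
       rclpy.init()
       node = TableObstaclePublisher()
       node.publish_table()
       rclpy.spin_once(node, timeout_sec=1.0)   # give the message time to go out
       node.destroy_node()
       rclpy.shutdown()


   if __name__ == '__main__':
       main()

Because the message is published as a diff (``is_diff=True``), the box is added to the current scene without clearing other objects; it should then appear in RViz under ``Scene Objects``.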
Visualizing in Foxglove
~~~~~~~~~~~~~~~~~~~~~~~

To visualize a reconstruction streamed from a remote machine, for example a robot, we recommend using `Foxglove `__.

* When running Isaac Manipulator on Jetson AGX Orin, it is recommended to display the visualization on a remote machine instead of on the Jetson directly. This will free up compute resources for the Isaac Manipulator pipeline.
* When visualizing from a remote machine over WiFi, bandwidth is limited and easily exceeded. Exceeding this bandwidth can lead to poor visualization results. For best results, we recommend visualizing a limited number of topics and avoiding high-bandwidth topics such as images.

To visualize with Foxglove, complete the following steps on your remote visualization machine:

#. Complete the :ref:`Foxglove setup ` guide.
#. For the RealSense pose-to-pose tutorial, see the :ir_repo:`realsense_foxglove.json ` Foxglove layout configuration as an example. When running with a Hawk camera, use the :ir_repo:`hawk_foxglove.json ` configuration.
#. Launch any Isaac Manipulator tutorial with the commands shown :ref:`here ` and append the ``run_foxglove:=True`` flag.
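When choosing which topics to include in the Foxglove layout, it can help to measure their bandwidth on the robot before streaming them over WiFi. ``ros2 topic bw`` reports the average bandwidth of a topic; the image topic below is only a placeholder name, so substitute the topics from your own layout:

.. code:: bash

   # Prefer topics that report bandwidth in the kB/s range for remote viewing over WiFi.
   ros2 topic bw /tf
   ros2 topic bw /camera_1/color/image_raw   # placeholder name for a high-bandwidth image topic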