Setup Hardware and Software for Real Robot with Isaac ROS Manipulation#
Overview#
This tutorial walks through the process of setting up the hardware and software for a real robot with Isaac ROS Manipulation.
Tutorial#
Set Up UR Robot#
Refer to the Set Up UR Robot section.
Set Up Cameras for Robot#
Refer to the Set Up Cameras for Robot section.
Note
Multiple cameras can help reduce occlusion and noise in the scene and therefore increase the quality and completeness of the 3D reconstruction used for collision avoidance.
While the Pick and Place workflow with multiple cameras runs scene reconstruction for obstacle-aware planning on all cameras, object detection and pose estimation are only enabled on the camera with the lowest index.
Reflective or smooth, featureless surfaces in the environment may increase noise in the depth estimation.
Use of multiple cameras is recommended.
Mixing stereo camera types is untested but may work with modifications to the launch files.
Warning
The obstacle avoidance behavior demonstrated in this tutorial is not a safety function and does not comply with any national or international functional safety standards. When testing obstacle avoidance behavior, do not use human limbs or other living entities.
Set Up Development Environment#
Set up your development environment by following the instructions in getting started.
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.
Build Isaac ROS Manipulation Packages#
Activate the Isaac ROS environment:
isaac-ros activate
Install the prebuilt Debian package:
NVIDIA Internal: Run these commands to add the internal apt repository:
sudo apt install curl -y
k="/usr/share/keyrings/nvidia-isaac-ros.gpg"
curl -fsSL https://isaac.download.nvidia.com/isaac-ros/repos.key | sudo gpg --dearmor | sudo tee -a $k > /dev/null
f="/etc/apt/sources.list.d/nvidia-isaac-ros.list" && sudo touch $f
s="deb [signed-by=$k] https://urm.nvidia.com/artifactory/sw-isaac-staging-debian-local jammy release-3.3"
grep -qxF "$s" $f || echo "$s" | sudo tee -a $f
pin_content=$'package: *\nPin: origin isaac.download.nvidia.com\nPin-Priority: 400'
echo "$pin_content" | sudo tee /etc/apt/preferences.d/isaac-ros
sudo apt-get update
sudo apt-get install -y ros-humble-isaac-ros-manipulation-bringup
Clone this repository under
${ISAAC_ROS_WS}/src:
cd ${ISAAC_ROS_WS}/src && \
  git clone --recursive -b release-3.3 git@github.com:NVIDIA-ISAAC-ROS/validation-isaac_ros_manipulation.git isaac_ros_manipulation
Activate the Isaac ROS environment:
isaac-ros activate
Use rosdep to install the package’s dependencies:
NVIDIA Internal: Run these commands to add the internal apt repository:
sudo apt install curl -y
k="/usr/share/keyrings/nvidia-isaac-ros.gpg"
curl -fsSL https://isaac.download.nvidia.com/isaac-ros/repos.key | sudo gpg --dearmor | sudo tee -a $k > /dev/null
f="/etc/apt/sources.list.d/nvidia-isaac-ros.list" && sudo touch $f
s="deb [signed-by=$k] https://urm.nvidia.com/artifactory/sw-isaac-staging-debian-local jammy release-3.3"
grep -qxF "$s" $f || echo "$s" | sudo tee -a $f
pin_content=$'package: *\nPin: origin isaac.download.nvidia.com\nPin-Priority: 400'
echo "$pin_content" | sudo tee /etc/apt/preferences.d/isaac-ros
sudo apt-get update
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_manipulation/isaac_ros_manipulation_bringup --ignore-src -y
Accept NVIDIA model EULAs before building:
The build process will download perception models (ESS, FoundationStereo, FoundationPose, SyntheticaDETR, Grounding DINO) that require accepting NVIDIA’s End-User License Agreements (EULAs). Set the following environment variable to accept the terms:
export ISAAC_ROS_ACCEPT_EULA=1
Note
By setting this variable, you accept the terms and conditions of the EULAs for the perception models listed above. These models are distributed on the NVIDIA NGC Catalog under NVIDIA’s standard model licenses.
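If you want the EULA acceptance to carry over to new terminals, one minimal sketch (an assumption, not part of the official setup) is to append the export to your shell’s rc file:

```shell
# Sketch: persist the EULA acceptance in the shell rc file so new terminals
# inherit it. RC_FILE defaults to ~/.bashrc; adjust for your shell.
RC_FILE="${RC_FILE:-$HOME/.bashrc}"
LINE='export ISAAC_ROS_ACCEPT_EULA=1'
# The grep guard keeps the line from being appended twice on repeated runs
grep -qxF "$LINE" "$RC_FILE" 2>/dev/null || echo "$LINE" >> "$RC_FILE"
```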
Build the package from source:
cd ${ISAAC_ROS_WS}
export MANIPULATOR_INSTALL_ASSETS=1
export FOUNDATIONSTEREO_MODEL_RES=low_res
colcon build --symlink-install --packages-up-to isaac_ros_manipulation_bringup
Note
Setting FOUNDATIONSTEREO_MODEL_RES=low_res is recommended because the build process installs FoundationStereo models. The default high_res model requires 16 GB of GPU memory during TensorRT conversion, while low_res requires 8 GB.
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Isaac ROS environment.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
source install/setup.bash
Set Up Perception Deep Learning Models#
Prepare the ESS model to run depth estimation:
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
The FoundationStereo model is also available for stereo depth estimation. If you encounter any issues, follow the FoundationStereo documentation to set up the model.
export FOUNDATIONSTEREO_MODEL_RES=low_res
ros2 run isaac_ros_foundationstereo_models_install install_foundationstereo_models.sh --eula \
  --model_res low_res
Set up the FoundationPose model. If you encounter any issues, follow the FoundationPose documentation to set up the model.
ros2 run isaac_ros_foundationpose_models_install install_foundationpose_models.sh --eula
Set up the SyntheticaDETR model. If you encounter any issues, follow the SyntheticaDETR documentation to set up the model.
ros2 run isaac_ros_rtdetr_models_install install_rtdetr_models.sh --eula
Set up the Grounding DINO model. For troubleshooting, refer to the isaac_ros_grounding_dino package documentation.
ros2 run isaac_ros_grounding_dino_models_install install_grounding_dino_models.sh --eula
Prepare the Segment Anything (SAM) ONNX model.
SAM requires converting the PyTorch weights to ONNX format. This conversion can only be performed on x86 machines. If you intend to run on Jetson, you must first perform the conversion on an x86 machine and then copy the generated ONNX files to the Jetson device.
On an x86 machine, install the segment_anything package via pip:
pip install --no-deps --break-system-packages git+https://github.com/facebookresearch/segment-anything.git
Follow the conversion instructions in the Prepare Segment Anything ONNX Model section of the Segment Anything documentation.
If running on Jetson, copy the generated ONNX model files from the x86 machine to the corresponding location on the Jetson device.
Prepare the Segment Anything 2 (SAM2) ONNX model.
SAM2 also requires converting the PyTorch weights to ONNX format. This conversion can only be performed on x86 machines. If you intend to run on Jetson, you must first perform the conversion on an x86 machine and then copy the generated ONNX files to the Jetson device.
Follow the conversion instructions in the Prepare Segment Anything2 ONNX Model section of the Segment Anything 2 documentation.
If running on Jetson, copy the generated ONNX model files from the x86 machine to the corresponding location on the Jetson device.
To set up perception models such as FoundationPose and SyntheticaDETR, and to download sample object assets, run the following commands. These commands also provide a final verification that all the models are installed correctly:
export MANIPULATOR_INSTALL_ASSETS=1
export FOUNDATIONSTEREO_MODEL_RES=low_res
ros2 run isaac_ros_manipulation_asset_bringup setup_perception_models.py --models all

export MANIPULATOR_INSTALL_ASSETS=1
export FOUNDATIONSTEREO_MODEL_RES=low_res
colcon build --packages-up-to isaac_ros_manipulation_asset_bringup
Note
Running this command can take up to 15 minutes on Jetson AGX Orin. NVLabs has provided a DOPE pre-trained model using the HOPE dataset. To train your own DOPE model, see here.
Note
For details on which models are downloaded and how to set up specific models individually, refer to the isaac_ros_manipulation_asset_bringup package documentation.
Warning
If you encounter the error ERROR: segment_anything package not found, ensure you have installed the segment_anything package on your x86 machine as described in the SAM model preparation step above.
As a sanity check, run this command to verify that the assets are set up correctly:
ros2 run isaac_ros_manipulation_asset_bringup setup_perception_models.py --models all
You should see the following output:
INFO: === Setting up FoundationPose assets ===
INFO: Mac and Cheese assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1 - Skipping download
INFO: === Setting up DOPE model ===
INFO: DOPE model setup completed successfully
INFO: === Setting up Segment Anything assets ===
INFO: Segment Anything assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_segment_anything - Skipping download
INFO: === Setting up SAM model ===
INFO: SAM model already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_segment_anything/vit_b.pth - Skipping download
INFO: ONNX model already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/segment_anything/1/model.onnx - Skipping conversion
INFO: Config file already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/segment_anything/config.pbtxt - Skipping copy
INFO: SAM model setup completed successfully
INFO: Setup results:
INFO: All requested models were set up successfully!
Build Robotiq Gripper Dependencies#
Clone the Isaac ROS fork of ros2_robotiq_gripper and tylerjw/serial under ${ISAAC_ROS_WS}/src:
cd ${ISAAC_ROS_WS}/src && \
  git clone --recursive https://github.com/NVIDIA-ISAAC-ROS/ros2_robotiq_gripper && \
  git clone -b ros2 https://github.com/tylerjw/serial
Use rosdep to install the packages’ dependencies:
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/ros2_robotiq_gripper ${ISAAC_ROS_WS}/src/serial --ignore-src -y
Build the gripper dependencies:
cd ${ISAAC_ROS_WS}
colcon build --symlink-install --packages-select-regex robotiq* serial --cmake-args "-DBUILD_TESTING=OFF" && \
  source install/setup.bash  # Source the workspace after building gripper dependencies
Configure Robotiq Gripper#
Before running the pick and place workflow, make sure to follow the instructions for setting up the UR robot and the gripper.
You will have to change the Tool I/O settings to User mode as shown below, especially if you see this error message: Failed to communicate with the Robotiq gripper
Note
If there are any issues with communication between the robot and the Jetson unit, refer to this section. Run the Driver and Hardware Tests to make sure your robot drivers are in a good state.
Install Python Dependencies#
Install RSL-RL:
sudo apt-get install -y python3-git \
&& pip install --break-system-packages tensordict \
&& pip install --break-system-packages --no-deps rsl-rl-lib==3.1.1
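As an optional sanity check (the module names tensordict and rsl_rl are assumed from the packages installed above), you can confirm the dependencies are importable:

```shell
# check_mod exits 0 if the given Python module can be found, 1 otherwise
check_mod() { python3 -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('$1') else 1)"; }
check_mod tensordict && echo "tensordict OK" || echo "tensordict missing"
check_mod rsl_rl && echo "rsl_rl OK" || echo "rsl_rl missing"
```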
Run Pre-Flight Tests on Real Robot (Optional but highly recommended)#
To verify that you have set up Isaac ROS Manipulation correctly, run the following commands:
export ENABLE_MANIPULATOR_TESTING=on_robot
export ISAAC_ROS_MANIPULATION_TEST_CONFIG=<config_file_path_for_your_robot>
python -m pytest ${ISAAC_ROS_WS}/src/isaac_ros_manipulation/isaac_ros_manipulation_bringup/test
Note
You can also use colcon test --packages-select isaac_ros_manipulation_bringup to run the tests.
However, pytest produces clearer output and makes it easier to view the status and progress of the tests.
If these tests fail, refer to Isaac ROS Manipulation Testing Guide for more information.
It is recommended to run the tests manually using launch_test and then inspect the results yourself.
This runs a series of tests that verify Isaac ROS Manipulation is working correctly.