Set Up Hardware and Software for a Real Robot with Isaac ROS Manipulation#

Overview#

This tutorial walks through the process of setting up the hardware and software for a real robot with Isaac ROS Manipulation.

Tutorial#

Set Up UR Robot#

  1. Refer to the Set Up UR Robot section.

Set Up Cameras for Robot#

  1. Refer to the Set Up Cameras for Robot section.

Note

Multiple cameras can help reduce occlusion and noise in the scene and therefore increase the quality and completeness of the 3D reconstruction used for collision avoidance.

While the Pick and Place workflow with multiple cameras runs scene reconstruction for obstacle-aware planning on all cameras, object detection and pose estimation are only enabled on the camera with the lowest index.

Reflective or smooth, featureless surfaces in the environment may increase noise in the depth estimation.

Use of multiple cameras is recommended.

Mixing stereo camera types is untested but may work with modifications to the launch files.

Warning

The obstacle avoidance behavior demonstrated in this tutorial is not a safety function and does not comply with any national or international functional safety standards. When testing obstacle avoidance behavior, do not use human limbs or other living entities.

Set Up Development Environment#

  1. Set up your development environment by following the instructions in getting started.

  2. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.

Build Isaac ROS Manipulation Packages#

  1. Activate the Isaac ROS environment:

    isaac-ros activate
    
  2. Install the prebuilt Debian package:

    NVIDIA Internal: Run these commands to add the internal apt repository:

    sudo apt install curl -y
    k="/usr/share/keyrings/nvidia-isaac-ros.gpg"
    curl -fsSL https://isaac.download.nvidia.com/isaac-ros/repos.key | sudo gpg --dearmor | sudo tee $k > /dev/null
    f="/etc/apt/sources.list.d/nvidia-isaac-ros.list" && sudo touch $f
    s="deb [signed-by=$k] https://urm.nvidia.com/artifactory/sw-isaac-staging-debian-local jammy release-3.3"
    grep -qxF "$s" $f || echo "$s" | sudo tee -a $f
    
    pin_content=$'Package: *\nPin: origin isaac.download.nvidia.com\nPin-Priority: 400'
    echo "$pin_content" | sudo tee /etc/apt/preferences.d/isaac-ros
    
    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-manipulation-bringup
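The `grep -qxF` guard above keeps the source-list entry idempotent, so the commands can be re-run safely. The same pattern can be factored into a small helper; this is a sketch, and `append_once` (with its example entry) is a hypothetical name, not part of the tutorial:

```shell
# Append LINE to FILE only if the exact line is not already present.
append_once() {
  local file="$1" line="$2"
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

# Demonstration against a temporary file: repeated calls add the line only once.
tmp="$(mktemp)"
append_once "$tmp" "deb [signed-by=/usr/share/keyrings/example.gpg] https://example.invalid jammy main"
append_once "$tmp" "deb [signed-by=/usr/share/keyrings/example.gpg] https://example.invalid jammy main"
wc -l < "$tmp" | tr -d ' '   # prints 1
rm -f "$tmp"
```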
    
  Alternatively, build the packages from source:

  1. Clone this repository under ${ISAAC_ROS_WS}/src:

    cd ${ISAAC_ROS_WS}/src && git clone --recursive -b release-3.3 git@github.com:NVIDIA-ISAAC-ROS/validation-isaac_ros_manipulation.git isaac_ros_manipulation
    
  2. Activate the Isaac ROS environment:

    isaac-ros activate
    
  3. Use rosdep to install the package’s dependencies:

    NVIDIA Internal: Run these commands to add the internal apt repository:

    sudo apt install curl -y
    k="/usr/share/keyrings/nvidia-isaac-ros.gpg"
    curl -fsSL https://isaac.download.nvidia.com/isaac-ros/repos.key | sudo gpg --dearmor | sudo tee $k > /dev/null
    f="/etc/apt/sources.list.d/nvidia-isaac-ros.list" && sudo touch $f
    s="deb [signed-by=$k] https://urm.nvidia.com/artifactory/sw-isaac-staging-debian-local jammy release-3.3"
    grep -qxF "$s" $f || echo "$s" | sudo tee -a $f
    
    pin_content=$'Package: *\nPin: origin isaac.download.nvidia.com\nPin-Priority: 400'
    echo "$pin_content" | sudo tee /etc/apt/preferences.d/isaac-ros
    
    sudo apt-get update
    
    rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_manipulation/isaac_ros_manipulation_bringup --ignore-src -y
    
  4. Accept NVIDIA model EULAs before building:

    The build process will download perception models (ESS, FoundationStereo, FoundationPose, SyntheticaDETR, Grounding DINO) that require accepting NVIDIA’s End-User License Agreements (EULAs). Set the following environment variable to accept the terms:

    export ISAAC_ROS_ACCEPT_EULA=1
    

    Note

    By setting this variable, you accept the terms and conditions of the EULAs for the perception models listed above. These models are distributed on the NVIDIA NGC Catalog under NVIDIA’s standard model licenses.

  5. Build the package from source:

    cd ${ISAAC_ROS_WS}
    export MANIPULATOR_INSTALL_ASSETS=1
    export FOUNDATIONSTEREO_MODEL_RES=low_res
    colcon build --symlink-install --packages-up-to isaac_ros_manipulation_bringup
    

    Note

    Setting FOUNDATIONSTEREO_MODEL_RES=low_res is recommended because the build process installs FoundationStereo models. The default high_res model requires 16 GB of GPU memory during TensorRT conversion, while low_res requires 8 GB.

  6. Source the ROS workspace:

    Note

    Make sure to repeat this step in every terminal created inside the Isaac ROS environment.

    Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.

    source install/setup.bash
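
A missed environment variable in the steps above silently changes what the build installs, so it can help to verify all three before running `colcon build`. A minimal sketch, where `require_env` is an illustrative helper and not part of Isaac ROS:

```shell
# Report any of the build-time variables from the steps above that are unset.
require_env() {
  local missing=0 v
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return "$missing"
}

export ISAAC_ROS_ACCEPT_EULA=1
export MANIPULATOR_INSTALL_ASSETS=1
export FOUNDATIONSTEREO_MODEL_RES=low_res

require_env ISAAC_ROS_ACCEPT_EULA MANIPULATOR_INSTALL_ASSETS FOUNDATIONSTEREO_MODEL_RES \
  && echo "environment OK"
```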
    

Set Up Perception Deep Learning Models#

  1. Prepare the ESS model to run depth estimation:

    ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
    
  2. The FoundationStereo model is also available for stereo depth estimation. If you encounter any issues, refer to the FoundationStereo documentation.

    export FOUNDATIONSTEREO_MODEL_RES=low_res
    ros2 run isaac_ros_foundationstereo_models_install install_foundationstereo_models.sh --eula \
    --model_res low_res
    
  3. Set up the FoundationPose model. If you encounter any issues, refer to the FoundationPose documentation.

    ros2 run isaac_ros_foundationpose_models_install install_foundationpose_models.sh --eula
    
  4. Set up the SyntheticaDETR model. If you encounter any issues, refer to the SyntheticaDETR documentation.

    ros2 run isaac_ros_rtdetr_models_install install_rtdetr_models.sh --eula
    
  5. Set up the Grounding DINO model. For troubleshooting, refer to the isaac_ros_grounding_dino package documentation.

    ros2 run isaac_ros_grounding_dino_models_install install_grounding_dino_models.sh --eula
    
  6. Prepare the Segment Anything (SAM) ONNX model.

    SAM requires converting the PyTorch weights to ONNX format. This conversion can only be performed on x86 machines. If you intend to run on Jetson, you must first perform the conversion on an x86 machine and then copy the generated ONNX files to the Jetson device.

    1. On an x86 machine, install the segment_anything package via pip:

      pip install --no-deps --break-system-packages git+https://github.com/facebookresearch/segment-anything.git
      
    2. Follow the conversion instructions in the Prepare Segment Anything ONNX Model section of the Segment Anything documentation.

    3. If running on Jetson, copy the generated ONNX model files from the x86 machine to the corresponding location on the Jetson device.

  7. Prepare the Segment Anything 2 (SAM2) ONNX model.

    SAM2 also requires converting the PyTorch weights to ONNX format. This conversion can only be performed on x86 machines. If you intend to run on Jetson, you must first perform the conversion on an x86 machine and then copy the generated ONNX files to the Jetson device.

    1. Follow the conversion instructions in the Prepare Segment Anything2 ONNX Model section of the Segment Anything 2 documentation.

    2. If running on Jetson, copy the generated ONNX model files from the x86 machine to the corresponding location on the Jetson device.

  8. To set up perception models such as FoundationPose and SyntheticaDETR and to download sample object assets, run the following command. It also serves as a final verification that all the models are installed correctly:

    export MANIPULATOR_INSTALL_ASSETS=1
    export FOUNDATIONSTEREO_MODEL_RES=low_res
    
    ros2 run isaac_ros_manipulation_asset_bringup setup_perception_models.py --models all
    
    Alternatively, rebuilding the asset bringup package with the same environment variables set also installs the models and assets:

    export MANIPULATOR_INSTALL_ASSETS=1
    export FOUNDATIONSTEREO_MODEL_RES=low_res
    
    colcon build --packages-up-to isaac_ros_manipulation_asset_bringup
    

    Note

    Running this command can take up to 15 minutes on Jetson AGX Orin. NVLabs has provided a DOPE model pre-trained on the HOPE dataset. To train your own DOPE model, refer to the DOPE documentation.

    Note

    For details on which models are downloaded and how to set up specific models individually, refer to the isaac_ros_manipulation_asset_bringup package documentation.

    Warning

    If you encounter the error ERROR: segment_anything package not found, ensure you have installed the segment_anything package on your x86 machine as described in the SAM model preparation step above.

  9. As a sanity check, run this command to verify that the assets are set up correctly.

    ros2 run isaac_ros_manipulation_asset_bringup setup_perception_models.py --models all
    

    You should see the following output:

    INFO: === Setting up FoundationPose assets ===
    INFO: Mac and Cheese assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1 - Skipping download
    INFO: === Setting up DOPE model ===
    INFO: DOPE model setup completed successfully
    INFO: === Setting up Segment Anything assets ===
    INFO: Segment Anything assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_segment_anything - Skipping download
    INFO: === Setting up SAM model ===
    INFO: SAM model already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_segment_anything/vit_b.pth - Skipping download
    INFO: ONNX model already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/segment_anything/1/model.onnx - Skipping conversion
    INFO: Config file already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/segment_anything/config.pbtxt - Skipping copy
    INFO: SAM model setup completed successfully
    INFO: Setup results:
    INFO: All requested models were set up successfully!
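
Beyond the script's own report, the key files can also be verified directly on disk. A sketch using the asset paths from the sample output above, relative to `${ISAAC_ROS_WS}`; the `check_assets` helper name is hypothetical:

```shell
# Check that key model files from the sample output above exist under the workspace.
check_assets() {
  local ws="$1" missing=0 rel
  for rel in \
    isaac_ros_assets/isaac_ros_segment_anything/vit_b.pth \
    isaac_ros_assets/models/segment_anything/1/model.onnx \
    isaac_ros_assets/models/segment_anything/config.pbtxt
  do
    if [ ! -e "$ws/$rel" ]; then
      echo "missing: $rel"
      missing=1
    fi
  done
  return "$missing"
}

if [ -n "${ISAAC_ROS_WS:-}" ]; then
  check_assets "${ISAAC_ROS_WS}" && echo "assets present"
fi
```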
    

Build Robotiq Gripper Dependencies#

  1. Clone the Isaac ROS fork of ros2_robotiq_gripper and tylerjw/serial under ${ISAAC_ROS_WS}/src:

    cd ${ISAAC_ROS_WS}/src && \
      git clone --recursive https://github.com/NVIDIA-ISAAC-ROS/ros2_robotiq_gripper && \
      git clone -b ros2 https://github.com/tylerjw/serial
    

    Note

    • The fork is used to fix this bug in the original repository.

    • The custom serial package build is required because of Issue 21.

  2. Use rosdep to install the package’s dependencies:

    rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/ros2_robotiq_gripper ${ISAAC_ROS_WS}/src/serial --ignore-src -y
    
  3. Build the gripper dependencies:

    cd ${ISAAC_ROS_WS}
    colcon build --symlink-install --packages-select-regex "robotiq*" serial --cmake-args "-DBUILD_TESTING=OFF" && \
    source install/setup.bash  # Source the workspace after building gripper dependencies
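
Once the build finishes, you can confirm that the gripper packages landed in the install space. A sketch, where `robotiq_driver` and `serial` are the package names assumed from the two repositories above (adjust them if your checkout differs):

```shell
# Confirm each package produced an install-space directory after colcon build.
check_built() {
  local install_dir="$1" missing=0 pkg
  shift
  for pkg in "$@"; do
    if [ ! -d "$install_dir/$pkg" ]; then
      echo "missing: $pkg"
      missing=1
    fi
  done
  return "$missing"
}

if [ -n "${ISAAC_ROS_WS:-}" ]; then
  check_built "${ISAAC_ROS_WS}/install" robotiq_driver serial && echo "gripper packages built"
fi
```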
    

Configure Robotiq Gripper#

  1. Before running the pick and place workflow, make sure to follow the instructions for setting up the UR robot and the gripper.

  2. In PolyScope, set the Tool I/O control to User mode, especially if you see this error message:

    Failed to communicate with the Robotiq gripper

Note

If there are any issues with communication between the robot and the Jetson unit, refer to this section. Run the Driver and Hardware Tests to make sure your robot drivers are in a good state.
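
If the error persists, a quick reachability probe from the Jetson can separate network problems from driver problems. A sketch: the address is a placeholder, and 63352 is the port commonly served by the Robotiq URCap socket server — confirm both for your installation:

```shell
# Probe a TCP port on the robot to distinguish network issues from driver issues.
probe_robot() {
  local ip="$1" port="${2:-63352}"
  if command -v nc >/dev/null 2>&1 && nc -z -w 2 "$ip" "$port" 2>/dev/null; then
    echo "reachable: $ip:$port"
  else
    echo "unreachable: $ip:$port"
    return 1
  fi
}

# Example with a placeholder robot address:
probe_robot 192.168.56.101 63352 || true
```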

Install Python Dependencies#

Install RSL-RL:

sudo apt-get install -y python3-git \
   && pip install --break-system-packages tensordict \
   && pip install --break-system-packages --no-deps rsl-rl-lib==3.1.1
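
Because the install above uses `--no-deps`, it is worth confirming that the pinned version actually landed. A sketch, where `check_pip_pkg` is a hypothetical helper; adjust the version to match the pin above:

```shell
# Compare an installed pip package version against an expected pin.
check_pip_pkg() {
  local pkg="$1" want="$2" got
  got="$(pip show "$pkg" 2>/dev/null | awk '/^Version:/ {print $2}')"
  if [ "$got" = "$want" ]; then
    echo "ok: $pkg $got"
  else
    echo "mismatch: $pkg is '${got:-not installed}', expected $want"
    return 1
  fi
}

check_pip_pkg rsl-rl-lib 3.1.1 || true
```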