Set Up Hardware and Software for Real Robot#

Overview#

This tutorial walks through the process of setting up the hardware and software for a real robot with Isaac for Manipulation.

Set Up UR Robot#

  1. Refer to the Set Up UR Robot section.

Set Up Cameras for Robot#

  1. Refer to the Set Up Cameras for Robot section.

Note

Multiple cameras can help reduce occlusion and noise in the scene and therefore increase the quality and completeness of the 3D reconstruction used for collision avoidance.

While the Pick and Place workflow with multiple cameras runs scene reconstruction for obstacle-aware planning on all cameras, object detection and pose estimation are only enabled on the camera with the lowest index.

Reflective or smooth, featureless surfaces in the environment may increase noise in the depth estimation.

Use of multiple cameras is recommended.

Mixing stereo camera types is untested but may work with modifications to the launch files.

Warning

The obstacle avoidance behavior demonstrated in this tutorial is not a safety function and does not comply with any national or international functional safety standards. When testing obstacle avoidance behavior, do not use human limbs or other living entities.
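
Once the cameras are connected and their ROS drivers are running, a quick topic check along these lines can confirm that each camera is streaming. The topic name below is hypothetical and depends on your camera launch configuration:

    # List image-related topics and confirm frames are arriving
    ros2 topic list | grep -iE 'image|depth'
    ros2 topic hz /camera_1/color/image_raw   # hypothetical topic name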

Set Up Development Environment#

  1. Set up your development environment by following the instructions in getting started.

  2. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.

Build Isaac for Manipulation Packages#

To use the prebuilt Debian package:

  1. Activate the Isaac ROS environment:

    isaac-ros activate
    
  2. Install the prebuilt Debian package:

    sudo apt-get update
    
    sudo apt-get install -y ros-jazzy-isaac-manipulator-bringup
    
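If you installed the prebuilt Debian package, a quick check along these lines can confirm that it is available (this assumes the Isaac ROS environment is active):

    dpkg -s ros-jazzy-isaac-manipulator-bringup | grep Status   # Debian package state
    ros2 pkg prefix isaac_manipulator_bringup                   # ROS should resolve the package
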
Alternatively, build the packages from source:

  1. Clone this repository under ${ISAAC_ROS_WS}/src:

    cd ${ISAAC_ROS_WS}/src && git clone --recursive -b release-4.0 https://github.com/NVIDIA-ISAAC-ROS/isaac_manipulator.git isaac_manipulator
    
  2. Activate the Isaac ROS environment:

    isaac-ros activate
    
  3. Use rosdep to install the package’s dependencies:

    sudo apt-get update
    
    rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_manipulator/isaac_manipulator_bringup --ignore-src -y
    
  4. Install the segment_anything package via pip (a quick import check is sketched after this list):

    pip install --no-deps --break-system-packages git+https://github.com/facebookresearch/segment-anything.git
    
  5. Build the package from source:

    cd ${ISAAC_ROS_WS}
    export MANIPULATOR_INSTALL_ASSETS=1
    colcon build --symlink-install --packages-up-to isaac_manipulator_bringup
    
  6. Source the ROS workspace:

    Note

    Make sure to repeat this step in every terminal created inside the Isaac ROS environment.

    Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.

    source install/setup.bash
    
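After sourcing the workspace, a quick sanity check along these lines can confirm that the bringup package resolves and that the segment_anything module imports; the exact install path depends on your workspace layout:

    # The package should resolve to this workspace's install space
    ros2 pkg prefix isaac_manipulator_bringup
    # The segment_anything module should import without errors
    python3 -c "import segment_anything; print('segment_anything import OK')"
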

Set Up Perception Deep Learning Models#

  1. Prepare the ESS model to run depth estimation:

    ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
    
  2. The FoundationStereo model is also available for stereo depth estimation; prepare it with the commands below. If you run into any issues, refer to the FoundationStereo documentation.

    export FOUNDATIONSTEREO_MODEL_RES=low_res
    ros2 run isaac_ros_foundationstereo_models_install install_foundationstereo_models.sh --eula \
    --model_res low_res
    
  3. Set up the FoundationPose model. If you run into any issues, refer to the FoundationPose documentation.

    ros2 run isaac_ros_foundationpose_models_install install_foundationpose_models.sh --eula
    
  4. Set up the SyntheticaDETR model. If you run into any issues, refer to the SyntheticaDETR documentation.

    ros2 run isaac_ros_rtdetr_models_install install_rtdetr_models.sh --eula
    
  5. Set up the Grounding DINO model. If you run into any issues, refer to the GroundingDINO documentation.

    ros2 run isaac_ros_grounding_dino_models_install install_grounding_dino_models.sh --eula
    
  6. Install the segment_anything package via pip, if you have not already done so in the build step above:

    pip install --no-deps --break-system-packages git+https://github.com/facebookresearch/segment-anything.git
    

    Note

    The model conversion is only supported on x86 machines. If you are deploying on Jetson, copy the converted ONNX files to the corresponding locations on the Jetson machine (an example copy command is sketched at the end of this section).

  7. To set up perception models such as FoundationPose and SyntheticaDETR and to download the sample object assets, run the following command. The command also serves as a final verification that all the models are installed correctly:

    For the Jetson Thor, follow the GroundingDINO, SegmentAnything, and SegmentAnything2 documentation to set up and build the models. You will need to convert each model from the .pth format to the ONNX format so that the Isaac ROS nodes can pick it up.

    export MANIPULATOR_INSTALL_ASSETS=1
    export FOUNDATIONSTEREO_MODEL_RES=low_res
    
    ros2 run isaac_manipulator_asset_bringup setup_perception_models.py --models all

    Alternatively, you can trigger the same asset setup by building the isaac_manipulator_asset_bringup package with these environment variables set:

    export MANIPULATOR_INSTALL_ASSETS=1
    export FOUNDATIONSTEREO_MODEL_RES=low_res
    
    colcon build --packages-up-to isaac_manipulator_asset_bringup
    

    Note

    Running this command can take up to 15 minutes on Jetson AGX Thor. NVLabs has provided a DOPE pre-trained model using the HOPE dataset. To train your own DOPE model, see here.

    Note

    Refer to the isaac_manipulator_asset_bringup package for more information. For the SEGMENT_ANYTHING and SEGMENT_ANYTHING2 models, you are expected to convert the models from the .pth format to ONNX. This conversion only works on x86 machines, so you will have to copy the converted ONNX files to the corresponding locations on the Jetson machine (an example copy command is sketched at the end of this section).

    Note

    If you see the message ERROR: segment_anything package not found, please install it via: pip install --no-deps --break-system-packages git+https://github.com/facebookresearch/segment-anything.git, install the segment_anything package as shown and build the package again. The conversion is only supported on x86 machines; copy the converted ONNX files to the corresponding locations on the Jetson machine.

  8. As a sanity check, please run this command to verify that the assets are set up correctly.

    ros2 run isaac_manipulator_asset_bringup setup_perception_models.py --models all
    

    You should see the following output:

    INFO: === Setting up FoundationPose assets ===
    INFO: Mac and Cheese assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_foundationpose/Mac_and_cheese_0_1 - Skipping download
    INFO: === Setting up DOPE model ===
    INFO: DOPE model setup completed successfully
    INFO: === Setting up Segment Anything assets ===
    INFO: Segment Anything assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_segment_anything - Skipping download
    INFO: === Setting up SAM model ===
    INFO: SAM model already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_ros_segment_anything/vit_b.pth - Skipping download
    INFO: ONNX model already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/segment_anything/1/model.onnx - Skipping conversion
    INFO: Config file already exists at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/segment_anything/config.pbtxt - Skipping copy
    INFO: SAM model setup completed successfully
    INFO: === Setting up UR DNN Policy assets ===
    INFO: UR DNN Policy assets already exist at /workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/isaac_manipulator_ur_dnn_policy - Skipping download
    INFO: Setup results:
    INFO: All requested models were set up successfully!
    
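If you converted the Segment Anything models on an x86 machine and need them on a Jetson, a copy along these lines can be used. The user name and host name are placeholders, and the destination path assumes the same workspace layout shown in the output above:

    # Hypothetical example: copy the converted SAM model directory (model.onnx,
    # config.pbtxt) from the x86 host to the matching asset path on the Jetson.
    scp -r ${ISAAC_ROS_WS}/isaac_ros_assets/models/segment_anything \
        nvidia@jetson-hostname:/workspaces/isaac_ros-dev/ros_ws/isaac_ros_assets/models/
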

Build Robotiq Gripper Dependencies#

  1. Clone the Isaac ROS fork of ros2_robotiq_gripper and tylerjw/serial under ${ISAAC_ROS_WS}/src:

    cd ${ISAAC_ROS_WS}/src && \
      git clone --recursive https://github.com/NVIDIA-ISAAC-ROS/ros2_robotiq_gripper && \
      git clone -b ros2 https://github.com/tylerjw/serial
    

    Note

    • The fork is used to fix this bug in the original repository.

    • The custom serial package build is required because of Issue 21.

  2. Build the gripper dependencies:

    cd ${ISAAC_ROS_WS}
    colcon build --symlink-install --packages-select-regex robotiq* serial --cmake-args "-DBUILD_TESTING=OFF" && \
    source install/setup.bash  # Source the workspace after building gripper dependencies
    
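As a quick check, you can list the packages matched by the same selection regex used in the build command above; both the robotiq packages and serial should appear:

    cd ${ISAAC_ROS_WS}
    colcon list --packages-select-regex robotiq* serial
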

Configure Robotiq Gripper#

  1. Before running the gear assembly or pick and place workflow, make sure to follow the instructions for setting up the UR robot and the gripper.

  2. You will have to change the Tool I/O settings to User mode as shown below, especially if you see this error message:

    Failed to communicate with the Robotiq gripper
    
(Screenshot: PolyScope Tool I/O settings set to User mode - https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.0/resources/isaac_ros_docs/reference_workflows/isaac_for_manipulation/polyscope_2.jpg)

Note

If there are any issues with communication between the robot and the Jetson unit, refer to this section; a basic connectivity check is also sketched below. Run the Driver and Hardware Tests to make sure your robot drivers are in a good state.
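
As a basic connectivity check between the Jetson and the robot controller, you can ping the robot. The IP address below is a placeholder; substitute your robot's actual address:

    ping -c 3 192.168.56.101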

Install Python Dependencies#

Install RSL-RL:

sudo apt-get install -y python3-git \
   && pip install --break-system-packages tensordict \
   && pip install --break-system-packages --no-deps rsl-rl-lib==3.1.1
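
A quick import check can confirm that the packages installed correctly (the Python module names are tensordict and rsl_rl):

    python3 -c "import tensordict, rsl_rl; print('tensordict and rsl_rl import OK')"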