isaac_manipulator_ur_dnn_policy#
Source code available on GitHub.
Overview#
The isaac_manipulator_ur_dnn_policy package enables deployment of RL policies trained in Isaac Lab to accomplish specific tasks on a UR manipulator robot.
Warning
This package uses an experimental RL model that may or may not perform as expected.
Prerequisites#
Follow the setup instructions in Setup Hardware and Software for Real Robot.
3D print the assets for gear assembly; these assets can be found here.
Successfully validated the calibration of your UR robot by running the calibration validation test mentioned at the end of the calibration documentation.
Quickstart#
Set Up Development Environment#
Set up your development environment by following the instructions in getting started.
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the development environment during installation, which will interrupt the quickstart process.
Download Quickstart Assets#
Download quickstart data from NGC:
Make sure required libraries are installed.
sudo apt-get install -y curl jq tar
Then, run these commands to download the asset from NGC:
NGC_ORG="nvidia"
NGC_TEAM="isaac"
PACKAGE_NAME="isaac_manipulator_ur_dnn_policy"
NGC_RESOURCE="isaac_manipulator_ur_dnn_policy_assets"
NGC_FILENAME="quickstart.tar.gz"
MAJOR_VERSION=4
MINOR_VERSION=0
VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
AVAILABLE_VERSIONS=$(curl -s \
    -H "Accept: application/json" "$VERSION_REQ_URL")
LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
    .recipeVersions[]
    | .versionId as \$v
    | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
    | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
    | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
    | \$v
    " | sort -V | tail -n 1
)
if [ -z "$LATEST_VERSION_ID" ]; then
    echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
    echo "Found versions:"
    echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
else
    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
    FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
    curl -LO --request GET "${FILE_REQ_URL}" && \
    tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
    rm ${NGC_FILENAME}
fi
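The version-selection step in the jq filter above keeps only versions with a matching major number and a minor number no greater than the requested one, then picks the newest. A minimal Python sketch of that logic, using a hypothetical version list rather than real NGC output:

```python
# Sketch of the jq version-selection logic above, on example data.
MAJOR_VERSION, MINOR_VERSION = 4, 0
versions = ["3.9.1", "4.0.2", "4.1.0", "4.0.10"]  # hypothetical NGC versions

def parse(v):
    return [int(x) for x in v.split(".")]

# Keep versions with matching major and minor <= requested, then take the newest.
candidates = [v for v in versions
              if parse(v)[0] == MAJOR_VERSION and parse(v)[1] <= MINOR_VERSION]
latest = max(candidates, key=parse)
print(latest)  # → 4.0.10
```

Note that the numeric comparison (via `sort -V` in the script, or integer parsing here) matters: a plain string sort would rank "4.0.2" above "4.0.10".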
Install Python Dependencies#
Install RSL-RL:
sudo apt-get install -y python3-git \
&& pip install --break-system-packages tensordict \
&& pip install --break-system-packages --no-deps rsl-rl-lib==3.1.1
Build isaac_manipulator_ur_dnn_policy#
Activate the Isaac ROS environment:
isaac-ros activate
Install the prebuilt Debian package:
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-manipulator-bringup
Clone this repository under ${ISAAC_ROS_WS}/src:
cd ${ISAAC_ROS_WS}/src && \
git clone -b release-4.0 https://github.com/NVIDIA-ISAAC-ROS/isaac_manipulator_bringup.git isaac_manipulator_bringup
Activate the Isaac ROS environment:
isaac-ros activate
Use rosdep to install the package’s dependencies:
sudo apt-get update
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_manipulator/isaac_manipulator_bringup --ignore-src -y
Build the package from source:
cd ${ISAAC_ROS_WS} && \
colcon build --packages-up-to isaac_manipulator_bringup --base-paths ${ISAAC_ROS_WS}/src/isaac_manipulator/isaac_manipulator_bringup
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Isaac ROS environment.
Since this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
source install/setup.bash
Deploy Insertion Policy#
Make sure you have validated the calibration of your UR robot by running the calibration validation test mentioned at the end of the calibration documentation.
Prepare the Segment Anything model by following steps 1 to 3 from Prepare Segment Anything ONNX Model.
Prepare the FoundationPose model by following step 1 from Run Launch File.
Manually attach the gear to the robot gripper: select the gear you want to insert, open the gripper, place the gear in the grasp, and close the gripper.
Start the UR driver:
ros2 launch ur_robot_driver ur_control.launch.py ur_type:=<UR_TYPE> robot_ip:=<ROBOT_IP> initial_joint_controller:=impedance_controller launch_rviz:=False kinematics_params_file:=<calibration_file_path>
Replace <UR_TYPE> with the type of your UR robot (e.g., ur10e) and <ROBOT_IP> with the IP address of your robot.
Note
Replace <calibration_file_path> with the path to the calibration file for your robot. To learn how to calibrate your UR robot, please refer to this process.
In a separate terminal, run the pose estimation pipeline:
ros2 launch isaac_manipulator_bringup gear_assembly_pose_estimation.launch.py goal_frame:=<GOAL_FRAME>
Replace <GOAL_FRAME> with gear_shaft_small, gear_shaft_medium, or gear_shaft_large depending on the size of the gear you are inserting.
After the rqt_image_view window opens, click on the middle of the gear base (not the gear shafts) to segment the object. The object should be highlighted in a contrasting color. The topic name should be /segment_anything/colored_segmentation_mask and the topic for the click points should be /segment_anything/colored_segmentation_mask_mouse_left.
Note
Move the gripper out of the scene and make sure the gear base is clearly visible to the camera, not occluded by the gripper or other objects. Check the topic names in the rqt_image_view window to confirm that the point topic is being published; the segmentation model needs this hint from the user.
In a separate terminal, run the inference pipeline:
ros2 launch isaac_manipulator_ur_dnn_policy inference.launch.py checkpoint:=<GEAR_ASSEMBLY_MODEL_FILE_NAME>
Replace <GEAR_ASSEMBLY_MODEL_FILE_NAME> with model_gripper_85.pt or model_gripper_140.pt inside the isaac_ros_assets/isaac_manipulator_ur_dnn_policy folder, depending on which gripper you are using.
Note
If you are using the robotiq_2f_140 gripper, make this change in the $ISAAC_ROS_WS/isaac_ros_assets/isaac_manipulator_ur_dnn_policy/params/env.yaml file:
action_scale_joint_space:
  - 0.0325
  - 0.0325
  - 0.0325
  - 0.0325
  - 0.0325
  - 0.0325
The default value is 0.025, which works well with the robotiq_2f_85 gripper; for the robotiq_2f_140 gripper, the value needs to be 0.0325 to overcome the higher static friction (stiction) and slight differences in the training process. This hyperparameter will be removed in future releases as we publish policy updates.
Press play on the Teach Pendant to activate the controller; the robot should move and insert the gear.
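The action scale is a per-joint gain applied to the policy's normalized outputs. A minimal sketch of that mapping, where the variable names and joint values are illustrative assumptions rather than the actual inference-node implementation:

```python
# Illustrative sketch: per-joint action scale maps normalized policy outputs
# to joint position deltas. Values below are made up for the example; 0.0325
# is the robotiq_2f_140 setting discussed above.
policy_output  = [0.4, -0.2, 0.1, 0.0, 0.3, -0.5]          # normalized actions
action_scale   = [0.0325] * 6                               # per-joint gain
current_joints = [2.63, -0.88, 1.55, -2.29, -1.56, -2.04]   # rad, example state

targets = [q + s * a for q, s, a in zip(current_joints, action_scale, policy_output)]
print([round(t, 4) for t in targets])
```

A larger scale lets each policy step command a bigger joint motion, which is why the stiffer 2F-140 setup needs the higher value to break static friction.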
Why does the robot fail to insert the gear?#
Insertion failures can usually be traced to one of the following issues.
Calibration Errors
The pose that is being sent to the insertion policy might have calibration errors of more than 1 cm. We train our policies to be robust to 1 cm error in pose estimation but larger errors are unlikely to work. One can verify how bad the calibration error is by running the calibration validation test mentioned at the end of the calibration documentation.
As shown in the documentation, the error is around 1 cm in the worst-case scenario.
Pose Estimation Errors and Pose Estimation Repeatability, Controller Steady State Errors
The pose estimation is a function of the calibration, the depth estimation and the segmentation mask quality.
To perform an integration test, run a test that performs pose estimation and then uses cuMotion to move the robot on top of the estimated pose with a z offset:
export ENABLE_MANIPULATOR_TESTING=manual_on_robot
launch_test $(ros2 pkg prefix --share isaac_manipulator_bringup)/test/test_pose_estimation_error_test.py
Note
Click on top of the peg stand in the rqt_image_view window so that the segmentation mask is generated and the pose estimation is performed.
Warning
If the robot's error is greater than 1.5 cm, there is a problem with either the calibration or the depth quality of the camera. This test only supports Realsense depth; to experiment with ESS or FOUNDATION_STEREO instead of stock Realsense depth, run the entire gear assembly workflow with those depth estimation models.
As shown in the sections above, you can also run the pose estimation repeatability tests to validate that the pose estimation is repeatable.
This test also exercises the controller steady-state error, since it uses cuMotion and then the controller to execute the trajectory. We do not have an isolated test to verify the steady-state error of the impedance controller, but we have verified it to be sub-centimeter and sub-degree in accuracy.
Note that controller steady-state error can be a major cause of failure for gear insertion. It is recommended to calibrate your UR robot before running any tests and to fill in the calibration_file_path parameter in the manipulator configuration file.
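A steady-state error check like the one described can be sketched as follows; the joint values are made up for illustration, and the sub-degree threshold comes from the text above:

```python
import math

# Illustrative check of controller steady-state error. Commanded vs. measured
# joint positions (rad) are example values, not real robot data.
commanded = [2.634, -0.876, 1.551, -2.292, -1.560, -2.039]
measured  = [2.636, -0.874, 1.550, -2.290, -1.561, -2.041]

max_err_rad = max(abs(c - m) for c, m in zip(commanded, measured))
max_err_deg = math.degrees(max_err_rad)
print(max_err_deg < 1.0)  # True when the steady-state error is sub-degree
```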
Out of distribution start state of the robot before gear insertion
The policy for insertion is trained with only a specific number of Inverse Kinematics (IK) starting solutions. Because a UR robot is a 6-DOF robot with 8 unique IK solutions for any given pose, it is likely that the user can start the insertion from an out-of-distribution position.
Given below is the home position we use before running the insertion test detailed in this section.
Joint Position: Base: 150.93 | Shoulder: -50.19 | Elbow: 88.87 | Wrist 1: -131.34 | Wrist 2: -89.41 | Wrist 3: -116.80
Here is a picture of the same home pose stored in the control box of the robot.
Please make sure before you try to insert the gear that the robot is close to the home pose as shown above.
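When commanding the joints programmatically, the home position above needs to be in radians. A small conversion sketch (joint ordering is an assumption):

```python
import math

# Home position from the table above (degrees), converted to radians.
# Order assumed: base, shoulder, elbow, wrist_1, wrist_2, wrist_3.
home_deg = [150.93, -50.19, 88.87, -131.34, -89.41, -116.80]
home_rad = [round(math.radians(d), 4) for d in home_deg]
print(home_rad)
```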
Also make sure that the pose being sent to the insertion policy is in distribution. The policy enforces this with a check before any inference: if you see a log message like the one shown below, the pose is not in distribution and the gear insertion is unlikely to work.
[WARNING] [observation_encoder_node]: target position out of distribution
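A check of this kind can be sketched as a simple bounds test; the workspace bounds here are illustrative assumptions, not the trained policy's actual limits:

```python
# Hedged sketch of an in-distribution check like the one observation_encoder_node
# performs before inference. Bounds (meters) are illustrative assumptions.
def target_in_distribution(pos, lo=(-0.8, -0.8, 0.0), hi=(0.8, 0.8, 0.6)):
    return all(l <= p <= h for p, l, h in zip(pos, lo, hi))

print(target_in_distribution((0.4, 0.2, 0.3)))   # inside the assumed bounds
print(target_in_distribution((1.5, 0.2, 0.3)))   # would trigger the warning
```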
Note
When running the policy for the 140 gripper, also be aware that the policy is trained with the wrist_3_link angle between -50 degrees and -10 degrees.
If the angle is positive or otherwise outside this range, performance and success rates will be much worse.
In the future, we will train the policy over a much wider distribution of joint angles and make it more robust to these changes.
For now, we ask the user to manually move the joint into this range before running the workflow.
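The wrist_3 requirement is a simple range test you can apply before starting the workflow (the helper name is made up; the -50 to -10 degree range comes from the note above):

```python
# Sketch of the wrist_3 range requirement for the 140-gripper policy:
# the trained range is -50 to -10 degrees.
def wrist3_in_range(angle_deg, lo=-50.0, hi=-10.0):
    return lo <= angle_deg <= hi

print(wrist3_in_range(-30.0))  # inside the trained range
print(wrist3_in_range(15.0))   # positive and out of range: expect worse success
```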
Static friction of the robot is too high for the policy to overcome
The root cause is that the friction behavior in simulation does not match the real robot. Some system identification may be required to find the minimum torque needed to move each joint and replicate that in simulation; we will release a system identification tool in the future to help with this. If you find that the robot is not moving, it may need a push to overcome static friction and start the manipulation process.
Conclusion
In essence, the user must make sure of the following:
Ensure calibration error is not greater than 1 cm
Ensure pose estimation error is not greater than 1.5 cm
Ensure pose estimation repeatability is good
Ensure controller steady state error is sub cm and sub degree
Ensure the robot is not in an OOD start state
Ensure the pose we are sending to the insertion policy is in distribution
If you have verified all of the above, then the gear insertion is likely to work.