Isaac for Manipulation Camera Calibration#
These instructions are for extrinsics calibration of a RealSense camera with a UR e-Series robot (tested with UR5e and UR10e). For dual-camera setups, please follow the Calibration Procedure twice (once for each camera), with only one camera plugged into the system each time.
Calibration Target#
Supported Patterns#
Both ArUco and ChArUco calibration targets are supported with this tutorial. For best results, we recommend a ChArUco pattern with an odd number of rows and a high number of markers, printed on the largest board size suitable for the robot being calibrated.
The following ArUco pattern dictionaries are supported:

- DICT_4X4_250
- DICT_5X5_250
- DICT_6X6_250
- DICT_7X7_250
- DICT_ARUCO_ORIGINAL
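If you want to preview a pattern before ordering a printed target, the following minimal Python sketch renders a matching ChArUco image with OpenCV. It is illustrative only: it assumes opencv-contrib-python 4.7 or newer and uses the UR5e target specifications from this tutorial (14x9 squares, DICT_5X5_250, 20 mm squares, 15 mm markers).

```python
# Minimal sketch (not part of the tutorial): render a ChArUco pattern for
# preview. Assumes opencv-contrib-python >= 4.7; the values below match the
# UR5e target used in this tutorial.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
# Arguments: (squares_x, squares_y), square side (m), marker side (m), dictionary.
board = cv2.aruco.CharucoBoard((14, 9), 0.020, 0.015, dictionary)
# Roughly 200 px per 20 mm square; border bits match the RViz settings used later.
image = board.generateImage((2800, 1800), marginSize=60, borderBits=1)
cv2.imwrite("charuco_ur5e_preview.png", image)
```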
Target Printing#
The calibration targets must be rigid, with no noticeable bending when lying flat or during operation. We recommend an aluminum/LDPE composite board for the required flatness and stiffness.
Note
We recommend using calib.io to obtain high quality calibration targets. The targets used in this tutorial were purchased from calib.io.
If printing from a different source, ensure that the printed targets are:
- Non-glossy, to minimize light reflections and glare.
- Black and white in color.
The calibration targets used in this tutorial are from calib.io and shown below: left - UR5e target, right - UR10e target.
The specifications for the calibration targets used in this tutorial are as follows:
| Robot type | Board dimension | ChArUco resolution | ArUco dictionary |
|---|---|---|---|
| UR5e | 200x300 (mm) | 14x9 | DICT_5X5_250 |
| UR10e | 300x400 (mm) | 12x9 | DICT_5X5_250 |
Physical Setup#
The calibration board should be mounted firmly to the end of the manipulator arm such that the board does not shift its position when the arm is moved. Any relative movement between the board and arm during calibration will result in inaccurate results. We recommend using a mount such as the one shown here.
A calibration target attached to a UR10e robot using the mount is shown below.
Note
The calibration board can also be held by the robot using a gripper. If using this method to hold the target and a portion of the ChArUco pattern is obstructed, ensure that the calibration tool is still able to successfully detect the target. Refer to Step 11 of Calibration Procedure for instructions on how to visualize the detection of the calibration target.
Software Setup#
Set up your development environment by following the instructions in: Developer Environment Setup.
Set up the UR Robot by following the instructions in: Universal Robots Robot.
Complete the RealSense setup tutorial if using a RealSense camera.
Clone `isaac_manipulator` under `${ISAAC_ROS_WS}/src`.

```bash
cd ${ISAAC_ROS_WS}/src && \
   git clone --recursive -b release-4.0 https://github.com/NVIDIA-ISAAC-ROS/isaac_manipulator.git isaac_manipulator
```
Activate the Isaac ROS environment.

```bash
isaac-ros activate
```

Build from source and install the moveit_calibration package.

```bash
cd ${ISAAC_ROS_WS}
git clone -b ros2-jazzy-updates https://github.com/iconner22/moveit_calibration.git src/third_party/moveit_calibration
rosdep update && rosdep install -r --from-paths src/third_party/moveit_calibration --ignore-src --rosdistro ${ROS_DISTRO} -y
colcon build --symlink-install --packages-up-to-regex moveit_calibration* --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
```
Note

The `moveit_calibration` package that we use has been updated to support ROS 2 Jazzy.

Install the Isaac for Manipulation packages. There are two options:
Option 1: Use `rosdep` to install the package's dependencies:

```bash
sudo apt-get update
rosdep update && \
   rosdep install -i -r \
   --from-paths ${ISAAC_ROS_WS}/src/isaac_manipulator/isaac_manipulator_bringup/ \
   --rosdistro jazzy -y
```

Then build and source the ROS workspace:

```bash
cd ${ISAAC_ROS_WS}
colcon build --symlink-install --packages-up-to isaac_manipulator_bringup
source install/setup.bash
```
Option 2: Get `isaac_manipulator_bringup` and its dependencies from apt.

```bash
sudo apt-get update
sudo apt-get install -y ros-jazzy-isaac-manipulator-bringup
```
Calibration Procedure#
Note
Before proceeding, complete the steps in Software Setup.
Open a new terminal window and activate the Isaac ROS environment.

```bash
isaac-ros activate
```

Note

We recommend setting a `ROS_DOMAIN_ID` via `export ROS_DOMAIN_ID=<ID_NUMBER>` in every new terminal where you run ROS commands, to avoid interference with other computers on the same network (ROS Guide).

Inside the container, source the workspace.

```bash
cd ${ISAAC_ROS_WS} && \
   source install/setup.bash
```
Launch the UR robot driver:
```bash
ros2 launch ur_robot_driver ur_control.launch.py ur_type:=<ROBOT_TYPE> robot_ip:=<ROBOT_IP_ADDRESS> launch_rviz:=false
```
The supported robot types are: [ur3e, ur5e, ur10e, ur16e].

Open a second terminal window and activate the Isaac ROS environment.

```bash
isaac-ros activate
```

Inside the container, source the workspace and launch the camera feed.

```bash
source install/setup.bash
ros2 launch isaac_manipulator_bringup camera_calibration.launch.py camera_type:=REALSENSE
```
This launch file starts the camera pipeline for RealSense. After launch, verify that the camera topics are publishing data, e.g., `ros2 topic hz /camera_1/color/image_raw`.
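If you prefer a programmatic check, a minimal rclpy sketch like the following (a hypothetical helper, not part of the tutorial) waits for a single frame on the color topic:

```python
# Minimal sketch: exit once a frame arrives on the RealSense color topic.
# Assumes the camera pipeline launched above is running.
import rclpy
from rclpy.executors import ExternalShutdownException
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraTopicCheck(Node):
    def __init__(self):
        super().__init__("camera_topic_check")
        self.create_subscription(Image, "/camera_1/color/image_raw", self.on_image, 10)

    def on_image(self, msg: Image):
        self.get_logger().info(f"Received {msg.width}x{msg.height} color image")
        rclpy.shutdown()  # stop spinning once one frame has been seen


def main():
    rclpy.init()
    try:
        rclpy.spin(CameraTopicCheck())
    except ExternalShutdownException:
        pass


if __name__ == "__main__":
    main()
```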
Open a third terminal window and activate the Isaac ROS environment.
```bash
isaac-ros activate
```

Note

If the RViz window with the HandEyeCalibration panel crashes, restart it by rerunning the commands below.

Inside the container, source the workspace and launch the robot with RViz.

```bash
source install/setup.bash
ros2 launch isaac_ros_cumotion_examples ur.launch.py ur_type:=<ROBOT_TYPE> robot_ip:=<ROBOT_IP_ADDRESS> launch_rviz:=true
```
The supported robot types are: [ur3e, ur5e, ur10e, ur16e].

Close the MotionPlanning panel on the left side of the window. This must be done to allow image panels to be added to RViz in a later step; if this step is not done, the application may crash.

In the RViz window that opens, click on Add > By display type > moveit_calibration_gui > HandEyeCalibration. Click on OK to add the calibration panel to the window. Verify that you have a new panel added to the left side of the RViz window titled HandEye Calibration.
In the HandEye Calibration panel, click on the Target tab and set the following parameters.

| Parameter | Value |
|---|---|
| Target Type | HandEyeTarget/Charuco |
| squares, X | Number of squares in the calibration target along the X direction |
| squares, Y | Number of squares in the calibration target along the Y direction |
| marker size (px) | 1500 |
| square size (px) | 2400 |
| margin size (px) | 60 |
| marker border (bits) | Number of bits around the marker border |
| ArUco dictionary | DICT_5X5_250 |
| longest board side (m) | Measured length of the longest side of the board pattern in meters |
| measured marker size (m) | Measured length of a marker in meters |
The following values may be used directly if using the calib.io calibration targets recommended in the Calibration Target section above.

| Parameter | Value |
|---|---|
| Target Type | HandEyeTarget/Charuco |
| squares, X | 12 (for UR10e target), 14 (for UR5e target) |
| squares, Y | 9 |
| marker size (px) | 1500 |
| square size (px) | 2400 |
| margin size (px) | 60 |
| marker border (bits) | 1 |
| ArUco dictionary | DICT_5X5_250 |
| longest board side (m) | 0.36 (for UR10e target), 0.28 (for UR5e target) |
| measured marker size (m) | 0.022 (for UR10e target), 0.015 (for UR5e target) |

After the above values have been set, click on Create Target.

Lastly, click on the Camera Image Topic drop-down menu and select the appropriate image topic. For RealSense the topic is `/camera_1/color/image_raw`.
In the HandEye Calibration panel, click on the Context tab and set the following parameters.

| Parameter | Value |
|---|---|
| Sensor Configuration | Eye-to-hand |
| Sensor frame | camera_1_link |
| Object frame | handeye_target |
| End-effector frame | wrist_3_link |
| Robot base frame | world |

Note

After setting the Sensor frame, the UI may freeze for a few seconds before becoming responsive again.
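Optionally, you can confirm that the frames entered above resolve in the TF tree. The following is a minimal rclpy sketch (a hypothetical helper, not part of the tutorial); note that `handeye_target` appears only while the target is being detected, and `camera_1_link` only once the panel publishes a camera pose.

```python
# Minimal sketch: report whether the frames used in the Context tab resolve
# against the world frame. Run while the driver and calibration panel are up.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformListener


class FrameCheck(Node):
    def __init__(self):
        super().__init__("frame_check")
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.create_timer(1.0, self.check)

    def check(self):
        for frame in ("wrist_3_link", "camera_1_link", "handeye_target"):
            ok = self.buffer.can_transform("world", frame, Time())
            self.get_logger().info(f"world -> {frame}: {'OK' if ok else 'missing'}")


def main():
    rclpy.init()
    rclpy.spin(FrameCheck())


if __name__ == "__main__":
    main()
```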
Add a panel to visualize the calibration target detection by clicking Add > By topic > /handeye_calibration > /target_detection > Image. Click on OK to add.
Verify that you have a new panel added to the left side of the RViz window titled Image, which shows the detected pose of the calibration target.
Add TF visualizations for the camera and target. In the RViz window, click on Add > By display type > rviz_default_plugins > TF. Click on OK to add the visualization to the window. Verify that you have an entry titled TF in the Displays panel on the left side of the window. Expand the TF entry, click on Frames, and deselect the option titled All Enabled. Scroll through the list of frames and re-enable `camera_1_link` and `handeye_target`. These are the camera frame and target frame, respectively.

In the HandEye Calibration panel, click on the Context tab and use the sliders under Camera Pose Initial Guess to provide an initial pose estimate. Adjust the translation and rotation values until the TF visualizations on the right side of the window roughly correspond to the positions of the camera and target board in real life with respect to the manipulator arm.
In the HandEye Calibration panel, click on the Calibrate tab. Make sure the calibration board is visible in the camera image. Move the manipulator arm to a desired pose and click Take sample. If the sample is successfully taken, it will appear in the box titled Pose samples. After a minimum of 5 samples are taken, the tool computes a camera pose together with a corresponding reprojection error (shown at the bottom left of the HandEyeCalibration window). Continue to move the manipulator arm and collect samples until the desired number of samples is reached.
Note
- Adding more diverse samples typically leads to a more robust calibration. We recommend a minimum of 2 translational poses with approximately 10 rotations for each translation. The rotations should move the board about all three axes (roll, pitch, yaw) at the same time, so that each rotation is sufficiently different from the prior one. Additional rotations can be added to improve the accuracy of the calibration. The translational poses should be positioned within the workspace of the manipulator arm.
- Verify that the reprojection error is low enough for your application, and that the camera position in RViz looks reasonable and aligns with the simulated robot. The procedure described in these instructions has been tested to provide reprojection error under 0.002 m and 0.002 rad.
After the desired number of samples has been taken, save the calibration results by clicking Save camera pose.
Note
Different solvers yield different calibration results; we recommend experimenting with different solvers to see which yields the best results for your use case. To select a different solver, click on the drop-down menu next to AX=XB Solver and choose the desired solver, then click on the Solve button to recompute the calibration results with the selected solver.
The `calibration.launch.py` file produced by the calibration tool will include a code snippet similar to the following example (values are for illustration only, as they will differ across setups):

```python
""" Static transform publisher acquired via MoveIt 2 hand-eye calibration """
""" EYE-TO-HAND: world -> camera """

from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description() -> LaunchDescription:
    nodes = [
        Node(
            package="tf2_ros",
            executable="static_transform_publisher",
            output="log",
            arguments=[
                "--frame-id",
                "world",
                "--child-frame-id",
                "camera",
                "--x",
                "1.77278",
                "--y",
                "0.939827",
                "--z",
                "-0.0753478",
                "--qx",
                "-0.128117",
                "--qy",
                "-0.0317539",
                "--qz",
                "0.955077",
                "--qw",
                "-0.265339",
                # "--roll",
                # "0.132507",
                # "--pitch",
                # "-0.229891",
                # "--yaw",
                # "-2.5843",
            ],
        ),
    ]
    return LaunchDescription(nodes)
```
The calibrated pose values can now be used with the rest of the Isaac for Manipulation suite by updating the values in static_transforms.launch.py as described here, specifying the frame name defined during Camera Setup in the `--child-frame-id` field. For dual-camera setups, make sure to use a different frame name for each calibrated camera.
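For illustration, an updated entry might look like the following sketch. The frame name and numeric values are placeholders taken from the example above; substitute the frame name you chose during Camera Setup and the values from your own calibration.

```python
# Minimal sketch: publish the calibrated camera pose with the frame name
# defined during Camera Setup as the child frame. All values are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description() -> LaunchDescription:
    return LaunchDescription([
        Node(
            package="tf2_ros",
            executable="static_transform_publisher",
            output="log",
            arguments=[
                "--frame-id", "world",
                "--child-frame-id", "camera_1_link",  # frame name from Camera Setup
                "--x", "1.77278", "--y", "0.939827", "--z", "-0.0753478",
                "--qx", "-0.128117", "--qy", "-0.0317539",
                "--qz", "0.955077", "--qw", "-0.265339",
            ],
        ),
    ])
```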
Validating Calibration#
To validate the calibration, run the following procedure.
First, perform 2 calibrations for the same camera. Take the resulting transforms and create a JSON file with the schema shown here:
[ { "parent_frame": "world", "child_frame": "camera_1_link", "translation": { "x": -0.412198, "y": 0.238021, "z": 0.138175 }, "rotation": { "x": -0.0527208, "y": -0.00452152, "z": 0.998266, "w": -0.0257981 }, "imager_used": "rgb_realsense" }, { "parent_frame": "world", "child_frame": "camera_1_link", "translation": { "x": -0.413251, "y": 0.235007, "z": 0.132836 }, "rotation": { "x": -0.0524579, "y": -0.00387522, "z": 0.998361, "w": -0.0225519 }, "imager_used": "realsense_infra2" }, { "parent_frame": "world", "child_frame": "camera_1_link", "translation": { "x": -0.413318, "y": 0.234351, "z": 0.132421 }, "rotation": { "x": -0.0525905, "y": -0.00255462, "z": 0.99835, "w": -0.0229073 }, "imager_used": "realsense_infra1" } ]
Store this file somewhere and reference it in an environment variable.
```bash
export CALIBRATION_TRANSFORMS_FILE=/path/to/your/calibration_transforms.json
export ENABLE_MANIPULATOR_TESTING=manual_on_robot
```
First, we record a calibration dataset containing the RGB and infrared camera streams, TFs (capturing the extrinsic calibration from the robot to the camera, as well as the inter-camera transforms between the RGB and infrared streams), and camera info (capturing the camera intrinsics).
Then we perform a test that visualizes the errors in the following:

- Target pose estimation using OpenCV's Perspective-n-Point (PnP) solver.
- The factory-calibrated transforms between the RGB and infrared streams.
- Using the 2 extrinsic calibrations to obtain error bounds on the calibration process (see the sketch after this list).
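To build intuition for this error bound, the difference between two world -> camera_1_link transforms can be computed directly. Below is a minimal sketch, assuming scipy is available and using two of the illustrative poses from the JSON example above:

```python
# Minimal sketch: estimate an error bound from two independent calibrations of
# the same camera by comparing their world -> camera_1_link transforms.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Illustrative placeholder poses (translation in meters, quaternion as x, y, z, w).
t1 = np.array([-0.412198, 0.238021, 0.138175])
q1 = [-0.0527208, -0.00452152, 0.998266, -0.0257981]
t2 = np.array([-0.413251, 0.235007, 0.132836])
q2 = [-0.0524579, -0.00387522, 0.998361, -0.0225519]

translation_error_m = np.linalg.norm(t1 - t2)
# Magnitude of the relative rotation between the two calibrations.
rotation_error_rad = (R.from_quat(q1).inv() * R.from_quat(q2)).magnitude()

print(f"translation difference: {translation_error_m * 100:.2f} cm")
print(f"rotation difference:    {rotation_error_rad:.4f} rad")
```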
Before running the calibration data recording script, make sure that:

- The calibration target is visible to the camera and remains static.
- The target is placed such that it fills as much of the image / field of view as possible.
Run this command, which records the data (this takes 60 seconds) and then runs the test.

```bash
export ENABLE_MANIPULATOR_TESTING=manual_on_robot
export NUM_SQUARE_HEIGHT=9   # For both the UR5e and UR10e targets
export NUM_SQUARE_WIDTH=14   # This is for the UR5e target; the UR10e target is 12
export LONGER_SIDE_M=0.28    # This is for the UR5e target; the UR10e target is 0.36
export MARKER_SIZE_M=0.015   # This is for the UR5e target; the UR10e target is 0.022
bash $(ros2 pkg prefix --share isaac_manipulator_bringup)/test/run_calibration_validation.sh
```
The output consists of JSON files like the following, along with images visualizing the error, written to the output folder indicated by the above bash script.
"extrinsics_error": { "error_mean_x_cm": -0.8119825981623039, "error_std_x_cm": 0.004882939882471144, "error_mean_y_cm": -0.558025644231066, "error_std_y_cm": 0.004597599422036537, "error_mean_z_cm": 0.3173449385777736, "error_std_z_cm": 0.04592212495318613, "projected_frame": "camera_1_infra2_optical_frame", "home_frame": "camera_1_infra1_optical_frame", "notes": "This error is the same as above dict however we now use the extrinsics transform 1 calibration point to world and another one used for world back to camera_1_link instead of using the target pose estimation error" }
Perhaps the most important check is to look at the images and observe how the error increases as more computation enters the pose estimation process: from OpenCV's PnP solver, to using the intrinsics, to using the 2 extrinsic calibrations.

As an example, this is what the visualized OpenCV pose error looks like. Here you see only red dots, because the projected blue dots match the detected pixel locations exactly.
This is what the error looks like once you add the intrinsics calibration. You start to see a small error.
This is what the error looks like once you add the 2 extrinsic calibrations. You start to see a bigger error, which can be around 1 cm in the worst case. Notice that the error is around half the size of a checker square; each square is 2 cm, which puts the error at roughly 1 cm at this camera-to-target distance. This number is inflated because the errors of the 2 calibrations compound on each other, so the actual error of a single calibration is likely closer to 5 mm.
Careful calibrations with a wide coverage can help drive down this error.
This also motivates the need for data-driven robotics algorithms that are robust to such errors in calibration.