isaac_ros_cumotion_object_attachment
Source code on GitHub.
Object Attachment
For automating pick-and-place tasks, integrating the geometry of the grasped object into motion planning is crucial
for preventing collisions and ensuring smooth operation. The isaac_ros_cumotion_object_attachment
package plays a key role
in this process by attaching the object’s geometry to the robot’s model when the object is picked up. This geometry
is obtained using a depth camera and approximated with spheres. These spheres are then sent to all Isaac ROS cuMotion
nodes, which incorporate the spheres into the collision representation of the robot. As a result, the planner keeps the grasped object from colliding with the environment, as demonstrated in the top and bottom panels of Figure 1-b, while also ensuring
correct robot segmentation, as shown in the middle panel of Figure 1-b.
This module’s effectiveness is seen throughout the entire pick-and-place sequence on a physical robot. It handles the object’s geometry during the attachment phase, after which the planning modules move the robot to the desired location. Once the object is placed, the module detaches the geometry from the robot. If the object geometry is not taken into account, as shown in Figure 1-a, it can lead to failures in segmentation or collisions with the environment. These failures highlight the importance of integrating this module to avoid planning disruptions.
The module performs the following steps in sequence:
Given a point cloud computed from one or more depth images, it identifies clusters of points close to the object, using heuristics to select the correct cluster.
A mesh is fit to the point cloud cluster.
The mesh is approximated by a set of collision spheres, which are then attached to the collision geometry of the robot's gripper.
An axis-aligned bounding box (AABB) encompassing the attached object is sent to nvblox for the purpose of clearing its contribution to the Euclidean signed distance field (ESDF), ensuring that the object itself isn’t treated as an obstacle by the planner.
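Each stage of this pipeline publishes an intermediate debug output (object origin, nearby points, clustered points, final point cloud, and collision spheres; see ROS Topics Published below). As a rough sanity check while the action server from the Quickstart is running, these topics can be discovered from the command line. The grep pattern below is only a guess at the topic names, which depend on the launch configuration:

# List candidate debug topics published by the object-attachment pipeline.
# The exact topic names depend on the launch configuration, so inspect the
# full output of `ros2 topic list` if this pattern matches nothing.
ros2 topic list | grep -iE 'object|cluster|cloud|sphere'

# Print a single message from one of the discovered topics, for example a point cloud:
# ros2 topic echo <topic_name> --once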
Three approaches are supported for generating the collision spheres for the attached object:
Point cloud clustering: For situations with minimal object information. Provides robust performance across a range of objects.
Cuboid proxy: For use when the object pose and a conservative bounding box are known. The user specifies the dimensions of the box (cuboid), along with an optional object-to-grasp pose; the latter defaults to the identity.
Object mesh: For use when detailed object information is available. The user provides the mesh and scale, along with the object-to-grasp pose, which is obtained as the inverse of the grasp pose.
These steps, shown in Figure 3, ensure that cuMotion takes into account the geometry of grasped objects, enabling smooth and collision-free motion planning.
Quickstart
Set Up Development Environment
Set up your development environment by following the instructions in getting started.
Clone isaac_ros_common under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.
Download Quickstart Assets
Download quickstart data from NGC:
Make sure required libraries are installed.
sudo apt-get install -y curl jq tar
Then, run these commands to download the asset from NGC:
NGC_ORG="nvidia"
NGC_TEAM="isaac"
PACKAGE_NAME="cumotion_object_attachment"
NGC_RESOURCE="cumotion_object_attachment_assets"
NGC_FILENAME="quickstart.tar.gz"
MAJOR_VERSION=3
MINOR_VERSION=2
VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
AVAILABLE_VERSIONS=$(curl -s \
    -H "Accept: application/json" "$VERSION_REQ_URL")
LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
    .recipeVersions[]
    | .versionId as \$v
    | \$v
    | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
    | split(\".\")
    | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
    | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
    | \$v
    " | sort -V | tail -n 1
)
if [ -z "$LATEST_VERSION_ID" ]; then
    echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
    echo "Found versions:"
    echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
else
    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
    FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
    curl -LO --request GET "${FILE_REQ_URL}" && \
    tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
    rm ${NGC_FILENAME}
fi
Build isaac_ros_cumotion_object_attachment
Launch the Docker container using the run_dev.sh script:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Install the prebuilt Debian package:
sudo apt-get update
sudo apt-get install -y ros-humble-isaac-ros-cumotion-object-attachment
Alternatively, build the package from source. Clone this repository under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone --recurse-submodules -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_cumotion.git isaac_ros_cumotion
Launch the Docker container using the run_dev.sh script:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Use rosdep to install the package's dependencies:

sudo apt-get update
rosdep update && rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_cumotion/isaac_ros_cumotion_object_attachment --ignore-src -y
Build the package from source:
cd ${ISAAC_ROS_WS} && \
   colcon build --packages-up-to isaac_ros_cumotion_object_attachment
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Docker container.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
source install/setup.bash
Run Launch File for isaac_ros_cumotion_object_attachment
Within the Docker container, execute the following launch file to initialize the action server that triggers the object attachment/detachment workflow:
ros2 launch isaac_ros_cumotion_object_attachment cumotion_bringup.launch.py
Note

Wait until cuMotion is ready for planning queries and the nodes robot_segmenter_node and attach_object_server_node are initialized.

Open a second terminal in the Docker container and navigate to the development environment:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
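(Optional) Before playing the rosbag, you can confirm from this terminal that the relevant nodes are up. The node names below are those mentioned in the note above; depending on the launch configuration they may carry a namespace prefix:

# Confirm that the robot segmenter and object-attachment action server are running.
# Inspect the full output of `ros2 node list` if the names are namespaced differently.
ros2 node list | grep -E 'robot_segmenter_node|attach_object_server_node'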
Use the rosbag to simulate an image stream that includes a grasped object by running the following command:
ros2 bag play --clock -l ${ISAAC_ROS_WS}/isaac_ros_assets/mac_cheese \
   --remap /cumotion/camera_1/world_depth:=/segmented_depth_image
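To confirm that the bag is streaming data, you can check the remapped depth topic from another terminal inside the container (the topic name comes from the remapping above):

# Report the publishing rate of the remapped depth stream from the rosbag.
ros2 topic hz /segmented_depth_image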
Open a third terminal in the Docker container and navigate to the development environment:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Attach and detach the grasped object using one of the following modes:
Cyclic Mode: Repeatedly attach and detach the object in a continuous loop:
ros2 run isaac_ros_cumotion_object_attachment attach_object_client_node \
   --ros-args -p object_attachment_mode:=cycle
Single Action Mode: Perform a single attach or detach action.
Attach the object:
ros2 run isaac_ros_cumotion_object_attachment attach_object_client_node \
   --ros-args -p object_attachment_mode:=once \
   -p object_attachment_attach_object:=True
Detach the object:
ros2 run isaac_ros_cumotion_object_attachment attach_object_client_node \
   --ros-args -p object_attachment_mode:=once \
   -p object_attachment_attach_object:=False
Note
The following parameters can also be adjusted in the attach_object_client_node to try out the other object attachment approaches:
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| | | | sphere: Represents an object that is either a perfect sphere or reconstructed using a point cloud that lies inside a spherical boundary. cuboid: Represents an object that is either a perfect cuboid or is approximated by a cuboid (e.g., the bounding box of the object mesh). custom_mesh: Represents an object that is reconstructed from a user-provided custom mesh, allowing for arbitrary and complex geometries beyond standard shapes. |
| | | | Path of the mesh file. |
| | | | Pose of the attached object with respect to the gripper frame. The order is translation x, y, z followed by a quaternion in order x, y, z, w. |
| | | | Represents the dimensions of the cuboid if cuboid is chosen as the shape. If a mesh is chosen, this represents the scale of the mesh in each dimension. |
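As an illustration only, an invocation of the client for the cuboid-proxy approach might look like the following. The parameter names for the object shape, scale, and pose are placeholders, not verified names; consult the table above and the attach_object_client_node source for the exact parameter names and value formats:

# Hypothetical example: the <...> parameter names are placeholders and must be
# replaced with the parameters actually exposed by attach_object_client_node.
ros2 run isaac_ros_cumotion_object_attachment attach_object_client_node \
   --ros-args -p object_attachment_mode:=once \
   -p object_attachment_attach_object:=True \
   -p <object_shape_param>:=cuboid \
   -p "<object_scale_param>:=[0.05, 0.07, 0.10]" \
   -p "<object_pose_param>:=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]"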
Visualize Results of isaac_ros_cumotion_object_attachment
Open a new terminal inside the Docker container:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Launch RViz for visualization:

rviz2
In RViz, set the Fixed Frame to base. Then click Add and select By topic to visualize the following topics:

Segmented Mask: Choose the Image option from /cumotion/camera_1/robot_mask to visualize the segmented mask that includes both the robot and the attached object.

Depth Image: Choose the Image option from /cumotion/camera_1/world_depth to visualize the depth image, excluding the robot and the attached object.

Camera Feed: Add the camera feed by selecting Camera from /camera_1/color/image_raw. This helps verify alignment between the robot mask and the actual camera image.

Collision Spheres: Select MarkerArray from viz_all_spheres/segmenter_attach_object to visualize all collision spheres managed by the robot_segmenter_node, including those associated with both the robot and the attached object.
Note
Visualization may take up to 1 minute to appear due to initial warm-up time. During this period, the depth mask might lag behind the camera image.
Open another terminal inside the Docker container and repeat step 1.
In RViz, visualize the collision spheres managed by the cuMotion planner node:

Select MarkerArray from viz_all_spheres/planner_attach_object. This topic visualizes all collision spheres managed by the cuMotion planner node, including spheres associated with both the robot and the attached object.
The success of these visualizations indicates that the robot_segmenter_node and cumotion_planner_node received the new object collision spheres via the action server and updated their kinematics accordingly. If running the attach_object_client_node in cycle mode, you should see spheres being added and removed in RViz, indicating successful attachment and detachment of object spheres.

Note
If the action_names argument in cumotion_bringup.launch.py is changed, the topics to visualize collision spheres will follow the naming convention viz_all_spheres/<action_name>.
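The sphere-visualization topics can also be discovered directly from the command line, which is useful if the action names (and therefore the topic names) have been changed:

# List all collision-sphere visualization topics published by the segmenter and planner nodes.
ros2 topic list | grep viz_all_spheres

# Confirm that markers are being published on one of them, for example:
ros2 topic hz viz_all_spheres/segmenter_attach_object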
Troubleshooting
Isaac ROS Troubleshooting
For solutions to problems with Isaac ROS, see troubleshooting.
API
AttachObjectServer
ROS Parameters
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| | | | Path or name of the XRDF file for the robot. |
| | | | Path to the URDF file for the robot. |
| | | | CUDA device index to use. |
| | | | Maximum allowed delay (in seconds) for which depth image and joint state messages are considered synchronized. |
| | | | Maximum duration (in seconds) for which to block while waiting for a transform to become available. |
| | | | Topic to subscribe to for the robot's joint states. |
| | | | List of topics to subscribe to for input depth images. |
| | | | List of topics to subscribe to for camera info corresponding to the input depth image streams. |
| | | | Names of actions used by dependent action servers in other nodes. |
| | | | Names of actions used by dependent action servers in other nodes. |
| | | | Radius (in meters), centered at the gripper's origin, used for depth point clustering. |
| | | | Radius (in meters) to use for points sampled on the surface of the object. A smaller radius allows motions to be generated very close to obstacles. |
| | | | Whether to bypass clustering entirely. |
| | | | Minimum number of neighboring points required for a point to be considered a core point in HDBSCAN. |
| | | | Smallest group size that will be considered a cluster in HDBSCAN. |
| | | | Threshold distance for merging clusters in HDBSCAN. |
| | | | Number of top clusters to select after clustering. |
| | | | Whether to group clusters after selection. |
| | | | Minimum number of points required in a cluster. |
| | | | Whether the Euclidean signed distance field (ESDF) should be updated upon receiving this service call. |
| | | | Amount by which to pad each dimension of the axis-aligned bounding box enclosing the object, in meters, for the purpose of ESDF clearing. |
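The full set of parameters and their current values can also be inspected on a running system. The node name below is the one used in the Quickstart; depending on the launch configuration it may carry a namespace prefix:

# List all parameters exposed by the object-attachment action server.
ros2 param list /attach_object_server_node

# Show the current value of a specific parameter:
# ros2 param get /attach_object_server_node <parameter_name>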
ROS Topics Subscribed
| ROS Topic | Interface | Description |
|---|---|---|
| | | Joint states of the robot. |
| | | Input depth images including the object to be attached. |
| | | Camera info corresponding to the input depth image streams. |
ROS Topics Published
| ROS Topic | Interface | Description |
|---|---|---|
| | | Origin of the object in the robot's frame. |
| | | Nearby points detected around the gripper. |
| | | Clustered points representing the object. |
| | | Final point cloud representing the object. |
| | | Collision spheres representing the robot. |
ROS Services Requested
| ROS Service | Interface | Description |
|---|---|---|
| | | Service used to clear the axis-aligned bounding box (AABB) of the object in the Euclidean signed distance field (ESDF). |
ROS Actions Advertised
| ROS Action | Interface | Description |
|---|---|---|
| | AttachObject.action | This action triggers the attachment or detachment of an object in the robot's kinematic model. |
| | UpdateLinkSpheres.action | This action updates the kinematic model of the robot by attaching an object and specifying its collision geometry using a set of spheres. |
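On a running system, the advertised actions and their fully qualified interface types can be listed with the standard ROS 2 CLI, and the goal, result, and feedback fields of AttachObject.action can then be inspected with ros2 interface show:

# List all actions currently advertised, together with their interface types.
ros2 action list -t

# Inspect the goal/result/feedback definition of the attach-object action,
# substituting the interface type reported by the previous command:
# ros2 interface show <package_name>/action/AttachObject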