For automating pick-and-place tasks, integrating the geometry of the grasped object into motion planning is crucial
for preventing collisions and ensuring smooth operation. The isaac_ros_cumotion_object_attachment package plays a key role
in this process by attaching the object’s geometry to the robot’s model when the object is picked up. This geometry
is obtained using a depth camera and approximated with spheres. These spheres are then sent to all Isaac ROS cuMotion
nodes, which incorporate them into the collision representation of the robot. As a result, the planner keeps the
grasped object clear of collisions with the environment, as demonstrated in the top and bottom panels of Figure 1-b,
and the robot is segmented correctly, as shown in the middle panel of Figure 1-b.
This module’s effectiveness is seen throughout the entire pick-and-place sequence on a physical robot. It handles the
object’s geometry during the attachment phase, after which the planning modules move the robot to the desired location.
Once the object is placed, the module detaches the geometry from the robot. If the object geometry is not taken into
account, as shown in Figure 1-a, segmentation can fail or the grasped object can collide with the environment. These
failures highlight the importance of this module in avoiding planning disruptions.
Figure 1: Motion planning and segmentation with and without accounting for grasped object geometry in cuMotion.
The module performs the following steps in sequence:

1. Given a point cloud computed from one or more depth images, it identifies clusters of points close to the object,
   using heuristics to select the correct cluster (see the clustering sketch after this list).
2. A mesh is fit to the selected point cloud cluster.
3. The mesh is approximated by a set of collision spheres, which are then attached to the collision geometry of the
   robot's gripper.
4. An axis-aligned bounding box (AABB) encompassing the attached object is sent to nvblox to clear the object's
   contribution to the Euclidean signed distance field (ESDF), ensuring that the object itself isn't treated as an
   obstacle by the planner.
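As an illustration of the clustering step above, the following Python sketch clusters a point cloud with DBSCAN and selects the cluster whose centroid lies closest to the gripper. This is a hypothetical stand-in for the package's internal heuristics, not its actual implementation; the function name and thresholds are assumptions.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def select_object_cluster(points, gripper_position, eps=0.02, min_samples=20):
        """Hypothetical sketch of the clustering heuristic.

        points: (N, 3) array of points near the end effector.
        gripper_position: (3,) gripper position in the same frame.
        Returns the (M, 3) cluster most likely to be the grasped object.
        """
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        best_cluster, best_dist = None, np.inf
        for label in set(labels) - {-1}:          # -1 marks DBSCAN noise points
            cluster = points[labels == label]
            dist = np.linalg.norm(cluster.mean(axis=0) - gripper_position)
            if dist < best_dist:                  # heuristic: nearest centroid wins
                best_cluster, best_dist = cluster, dist
        return best_cluster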
Three approaches are supported for generating the collision spheres for the attached object:

- Point cloud clustering: For situations with minimal object information. Provides robust performance across a range
  of objects.
- Cuboid proxy: For use when the object pose and a conservative bounding box are known. The user specifies the
  dimensions of the box (cuboid), along with an optional object-to-grasp pose; the latter defaults to the identity.
- Object mesh: For use when detailed object information is available. The user provides the mesh and scale, along with
  the object-to-grasp pose, which is obtained by inverting the grasp pose (see the pose-inversion sketch following
  Figure 2).
Figure 2: Object attachment results using the point cloud, cuboid, and mesh approaches, respectively.
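For the mesh approach, the object-to-grasp pose is the inverse of the grasp pose. As a minimal sketch of that inversion, assuming poses are given as a translation plus an x, y, z, w quaternion (the same convention as the object_attachment_object_pose parameter later in this section), with a hypothetical helper name:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def invert_pose(translation, quat_xyzw):
        # Invert a rigid-body pose given as translation + quaternion (x, y, z, w).
        # If the grasp pose maps the object frame to the gripper frame, its
        # inverse is the object-to-grasp pose used by the mesh approach.
        rot = Rotation.from_quat(quat_xyzw)
        inv_rot = rot.inv()
        inv_translation = -inv_rot.apply(translation)
        return inv_translation, inv_rot.as_quat()  # again in x, y, z, w order

    # Example: a grasp 10 cm above the object, rotated 180 degrees about x.
    t, q = invert_pose([0.0, 0.0, 0.10], [1.0, 0.0, 0.0, 0.0])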
These steps, shown in Figure 3, ensure that cuMotion takes into account the geometry of grasped objects,
enabling smooth and collision-free motion planning.
Figure 3: Object attachment workflow for multiple objects, from point cloud to collision spheres.
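As an illustration of the final step of this workflow, the AABB sent to nvblox can be derived directly from the attached collision spheres: the box must enclose every sphere center offset by its radius. A minimal numpy sketch follows; the helper is hypothetical, not part of the package API.

    import numpy as np

    def spheres_to_aabb(centers, radii):
        """Axis-aligned bounding box enclosing a set of spheres.

        centers: (N, 3) sphere centers in the world frame.
        radii:   (N,) sphere radii.
        Returns (min_corner, max_corner), each of shape (3,).
        """
        centers = np.asarray(centers)
        radii = np.asarray(radii)[:, None]    # broadcast each radius over x, y, z
        return (centers - radii).min(axis=0), (centers + radii).max(axis=0)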
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Note
We strongly recommend installing all sensor dependencies before starting any quickstarts.
Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package's contents:

    source install/setup.bash

Make sure to repeat this step in every terminal created inside the Docker container.
Run Launch File for isaac_ros_cumotion_object_attachment
Within the Docker container, execute the following launch file to initialize the action server that triggers the object attachment/detachment workflow:
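A representative command is sketched below. The launch file name cumotion_bringup.launch.py appears in the note at the end of this section, but its pairing with this package is an assumption, so consult the package's launch directory for the authoritative file name:

    ros2 launch isaac_ros_cumotion_object_attachment cumotion_bringup.launch.py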
The following parameters can also be adjusted in the attach_object_client_node to
try out the other object attachment approaches:
object_attachment_object_shape (string, default: 'sphere')
    sphere: Represents an object that is either a perfect sphere or is reconstructed from a point cloud that lies
    inside a spherical boundary.
    cuboid: Represents an object that is either a perfect cuboid or is approximated by a cuboid (e.g., the bounding
    box of the object mesh).
    custom_mesh: Represents an object that is reconstructed from a user-provided custom mesh, allowing for arbitrary
    and complex geometries beyond standard shapes.

object_attachment_mesh_resource (string, default: '/workspaces/isaac_ros-dev/temp/nontextured.stl')
    Path of the mesh file.

object_attachment_object_pose (float array, default: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
    Pose of the attached object with respect to the gripper frame, given as translation x, y, z followed by a
    quaternion in order x, y, z, w.

object_attachment_object_scale (float array, default: [0.09, 0.185, 0.035])
    Dimensions of the cuboid if cuboid is chosen as the shape. If custom_mesh is chosen, this represents the per-axis
    scale of the mesh.
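For example, to try the cuboid approach, these parameters can be supplied to the client node through a standard ROS 2 parameter file (the node name attach_object_client_node comes from above; the values shown are the documented defaults except for the shape):

    attach_object_client_node:
      ros__parameters:
        object_attachment_object_shape: 'cuboid'
        object_attachment_object_pose: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
        object_attachment_object_scale: [0.09, 0.185, 0.035]

The file can then be passed with --ros-args --params-file <path> when running the node.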
Visualize Results of isaac_ros_cumotion_object_attachment
In RViz, set the Fixed Frame to base. Then click Add and select By topic to visualize the following topics:
- Segmented Mask: Choose the Image option from /cumotion/camera_1/robot_mask to visualize the segmented mask that includes both the robot and the attached object.
- Depth Image: Choose the Image option from /cumotion/camera_1/world_depth to visualize the depth image, excluding the robot and the attached object.
- Camera Feed: Add the camera feed by selecting Camera from /camera_1/color/image_raw. This helps verify alignment between the robot mask and the actual camera image.
- Collision Spheres: Select MarkerArray from viz_all_spheres/segmenter_attach_object to visualize all collision spheres managed by the robot_segmenter_node, including those associated with both the robot and the attached object.
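Before adding these displays, it can help to confirm that the topics listed above are actually being published, for example:

    ros2 topic list | grep -E 'robot_mask|world_depth|viz_all_spheres'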
Note
Visualization may take up to 1 minute to appear due to initial warm-up time. During this period, the depth mask might lag behind the camera image.
Figure 4: Cyclic object attachment and detachment in RViz.
Top left: RGB image with robot and object collision spheres.
Bottom left: Depth image mask showing robot-object segmentation.
Right: 3D view of robot and object collision spheres.
Open another terminal inside the Docker container and repeat step 1.
In RViz, visualize the collision spheres managed by the cuMotion planner node:
Select MarkerArray from viz_all_spheres/planner_attach_object. This topic visualizes all collision spheres managed by the cuMotion planner node, including spheres associated with both the robot and the attached object.
The success of these visualizations indicates that the robot_segmenter_node and cumotion_planner_node received the new object collision spheres via the action server and updated their kinematics accordingly. If running the attach_object_client_node in cycle mode, you should see spheres being added and removed in RViz, indicating successful attachment and detachment of object spheres.
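To trigger attachment programmatically instead of through the provided client node, an action client can send goals to the action server started earlier. The sketch below is hypothetical: the action type, action name, and goal field are assumptions (as the note below explains, the actual names depend on the action_names launch argument), so substitute the definitions from the package's interface files.

    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node

    # Hypothetical import: substitute the actual action definition shipped
    # with the object-attachment package.
    from isaac_ros_cumotion_interfaces.action import AttachObject


    class AttachObjectDemoClient(Node):
        def __init__(self):
            super().__init__('attach_object_demo_client')
            # Action name is an assumption; it follows the action_names launch argument.
            self._client = ActionClient(self, AttachObject, 'attach_object')

        def send_goal(self, attach: bool):
            goal = AttachObject.Goal()
            goal.attach_object = attach  # hypothetical goal field
            self._client.wait_for_server()
            return self._client.send_goal_async(goal)


    def main():
        rclpy.init()
        node = AttachObjectDemoClient()
        future = node.send_goal(attach=True)   # attach now; send False later to detach
        rclpy.spin_until_future_complete(node, future)
        node.destroy_node()
        rclpy.shutdown()


    if __name__ == '__main__':
        main()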
Note
If the action_names argument in cumotion_bringup.launch.py is changed, the topics to visualize collision spheres will follow the naming convention viz_all_spheres/<action_name>.