Manipulation

Overview

In robotics, the term “manipulator” has traditionally had a very broad definition, referring not only to robots used for repositioning or modifying physical objects but also to robot arms generally, even when used in non-contact applications such as automated optical inspection. Increasingly, robotic manipulators are being used for tasks where their motion must adapt to changing conditions as perceived via cameras or other sensors.

An example of perception-driven manipulation is unstructured picking, where the robot might be tasked with picking a variety of objects whose positions are not known in advance. This requires modules to detect a given object, determine its pose, compute a suitable grasp (dependent on the installed gripper), plan a trajectory to bring the gripper to the desired pose while avoiding collisions, execute the planned trajectory, grasp the object, and finally plan a trajectory to the desired place point, all in real time. More challenging scenarios might require the planner to adapt to a changing environment where the presence and positions of obstacles also vary.
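
The following Python sketch outlines the stages of such a pipeline. It is pseudocode only: every function and object it calls is a hypothetical placeholder for the detection, pose estimation, grasp selection, planning, and execution modules described above, not an Isaac ROS API.

    # Pseudocode outline of one unstructured pick-and-place cycle.
    # All names below are hypothetical placeholders, not Isaac ROS APIs.
    def pick_and_place_once(camera, robot, gripper, place_pose):
        image, depth = camera.capture()                  # acquire an RGB-D frame
        detection = detect_object(image)                 # object detection
        object_pose = estimate_pose(detection, depth)    # 6-DoF pose estimation
        grasp_pose = select_grasp(object_pose, gripper)  # grasp suited to the installed gripper

        # Plan a collision-free trajectory to the grasp pose and execute it.
        to_grasp = plan_trajectory(robot.current_state(), grasp_pose, obstacles=world_obstacles())
        robot.execute(to_grasp)
        gripper.close()

        # Plan a trajectory to the place pose, execute it, and release the object.
        to_place = plan_trajectory(robot.current_state(), place_pose, obstacles=world_obstacles())
        robot.execute(to_place)
        gripper.open()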

Isaac Manipulator consists of a set of components and reference workflows for advanced perception-driven manipulation. These components include state-of-the-art packages for object detection and object pose estimation, as well as obstacle-aware motion generation, described in more detail below.

NVIDIA cuMotion

NVIDIA cuMotion is a software package for computing time-optimal, minimal-jerk trajectories for serial robot arms. It is capable of avoiding collisions with obstacles represented as a set of cuboids, meshes, signed distance fields (computed from one or more depth image streams using nvblox), or any combination of the three. cuMotion leverages NVIDIA hardware acceleration to compute such trajectories in a fraction of a second on Jetson AGX Orin or in tens of milliseconds on a discrete GPU such as the RTX 6000 (Ada Generation).
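
As a rough illustration of the kind of GPU-accelerated planning involved, the following sketch calls the underlying cuRobo Python API directly rather than cuMotion's ROS 2 interface. The configuration file names and minor API details are assumptions that may differ across cuRobo versions; treat this as a sketch, not a reference implementation.

    # Minimal sketch of GPU-accelerated trajectory generation with cuRobo.
    # "franka.yml" and "collision_table.yml" are example configuration files
    # distributed with cuRobo; exact names and APIs may vary by version.
    import torch

    from curobo.types.math import Pose
    from curobo.types.robot import JointState
    from curobo.wrap.reacher.motion_gen import MotionGen, MotionGenConfig

    config = MotionGenConfig.load_from_robot_config(
        "franka.yml",           # robot description: kinematics and collision spheres
        "collision_table.yml",  # world model: cuboid obstacles
        interpolation_dt=0.01,
    )
    motion_gen = MotionGen(config)
    motion_gen.warmup()  # pre-build CUDA kernels for low-latency planning

    start_state = JointState.from_position(
        torch.zeros(1, 7, device="cuda")  # current joint configuration of a 7-DoF arm
    )
    goal_pose = Pose.from_list([0.4, 0.0, 0.4, 0.0, 1.0, 0.0, 0.0])  # x, y, z, qw, qx, qy, qz

    result = motion_gen.plan_single(start_state, goal_pose)
    if result.success.item():
        trajectory = result.get_interpolated_plan()  # collision-free joint-space trajectory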

In the current release, the planning capabilities of cuMotion are exposed via a plugin for MoveIt 2. In addition, a ROS 2 node is provided that uses the robot's current joint configuration to segment the robot out of a depth image, so that obstacles in the environment can be reconstructed without spurious contributions from the robot itself.
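
Through the MoveIt 2 plugin, cuMotion is selected like any other planning pipeline. The sketch below uses the moveit_py Python API and assumes a MoveIt configuration with the cuMotion plugin enabled is already loaded; the pipeline name "isaac_ros_cumotion", the planning group "arm", and the named state "ready" are placeholders that depend on the configuration in use.

    # Sketch: requesting a plan from MoveIt 2 using a specific planning pipeline.
    # "isaac_ros_cumotion", "arm", and "ready" are placeholder names that depend
    # on the MoveIt configuration; MoveIt parameters must already be loaded.
    import rclpy
    from moveit.planning import MoveItPy, PlanRequestParameters

    rclpy.init()
    moveit = MoveItPy(node_name="cumotion_planning_example")
    arm = moveit.get_planning_component("arm")

    arm.set_start_state_to_current_state()
    arm.set_goal_state(configuration_name="ready")  # named state defined in the SRDF

    params = PlanRequestParameters(moveit, "isaac_ros_cumotion")  # assumed pipeline name
    plan_result = arm.plan(single_plan_parameters=params)
    if plan_result:
        moveit.execute(plan_result.trajectory, controllers=[])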

cuMotion incorporates technology developed by NVIDIA Research and leverages the cuRobo library internally.

See Isaac ROS cuMotion for more on cuMotion and instructions for getting started.

Robot Configuration

In order to generate motion or perform segmentation for a given robot, cuMotion requires two files:

  1. A Unified Robot Description Format (URDF) file, describing the robot's basic kinematics.

  2. An Extended Robot Description Format (XRDF) file, supplementing the URDF file with collision geometry (as a set of collision spheres), a definition of the configuration space (c-space) used for planning, potential modifiers to the URDF file, and other such data.

See the following specification for details on XRDF.
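
A common way to provide these two files is as parameters when launching the cuMotion planner node. The sketch below is a ROS 2 Python launch file; the package, executable, and parameter names are assumptions based on isaac_ros_cumotion and should be checked against the package documentation for the release in use.

    # Sketch of a ROS 2 launch file supplying the URDF and XRDF files to the
    # cuMotion planner node. Package, executable, and parameter names are
    # assumptions; consult the isaac_ros_cumotion documentation.
    from launch import LaunchDescription
    from launch_ros.actions import Node


    def generate_launch_description():
        return LaunchDescription([
            Node(
                package='isaac_ros_cumotion',
                executable='cumotion_planner_node',  # assumed executable name
                name='cumotion_planner',
                parameters=[{
                    'robot': '/path/to/robot.xrdf',      # XRDF: collision spheres, c-space, limits
                    'urdf_path': '/path/to/robot.urdf',  # URDF: basic kinematics
                }],
            ),
        ])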

For convenience, a visual Robot Description Editor with XRDF support is available in Isaac Sim 4.0 and later.

Warning

In the current release, isaac_ros_cumotion also accepts robot description files in a legacy cuRobo format, but this support will be discontinued in a future release. Please use XRDF instead.

Grasp Authoring

Because the system can identify specific objects in the world along with their poses, high-quality grasps between a specific gripper and object can be computed offline ahead of time. An isaac_grasp file is a YAML file with a human-readable format for storing such predefined grasps.

See the following specification for details on isaac_grasp files.
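
As a simple illustration, predefined grasps can be read back with a standard YAML parser. The file name and field names in the sketch below (such as grasps, position, and orientation) are illustrative assumptions; refer to the isaac_grasp specification above for the authoritative schema.

    # Sketch: reading predefined grasps from an isaac_grasp YAML file.
    # The file name and field names are illustrative assumptions; consult the
    # isaac_grasp specification for the actual schema.
    import yaml

    with open('soup_can_grasps.yaml') as f:  # hypothetical file of authored grasps
        grasp_data = yaml.safe_load(f)

    for name, grasp in grasp_data.get('grasps', {}).items():
        position = grasp.get('position')        # gripper position relative to the object frame
        orientation = grasp.get('orientation')  # gripper orientation relative to the object frame
        print(f'{name}: position={position}, orientation={orientation}')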

For convenience, Isaac Sim 4.2 and later includes a Grasp Editor that supports authoring grasps by hand and exporting them to an isaac_grasp file. It also supports importing an isaac_grasp file to visualize and validate the included grasps.

Pick and Place

The Pick and Place workflow is a reference implementation for perception-driven manipulation using Isaac ROS packages. It demonstrates how to use the Isaac ROS Object Detection and Isaac ROS Pose Estimation packages to detect objects and estimate their poses, respectively, and how to use Isaac ROS cuMotion to plan and execute trajectories for picking and placing objects.

See the following page for more information on the Pick and Place reference workflow.

Tutorials

To continue your exploration, check out the following suggested tutorials:

More advanced tutorials accompany the Isaac Manipulator reference workflows: