isaac_ros_cumotion
Source code on GitHub.
Motion Generation
The motion generation capabilities provided by the cuMotion planner node are exposed via a plugin for MoveIt 2. Please see the corresponding quickstart to get started.
Robot Segmentation
Quickstart
Set Up Development Environment
Set up your development environment by following the instructions in getting started.
Clone isaac_ros_common under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Warning
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.
Download Quickstart Assets for Robot Segmentation
Download the r2b_robotarm dataset from the r2b 2024 dataset on NGC.

Place the dataset at ${ISAAC_ROS_WS}/isaac_ros_assets/r2b_2024/r2b_robotarm.
Build isaac_ros_cumotion
Launch the Docker container using the run_dev.sh script:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Install the prebuilt Debian package:
sudo apt-get install -y ros-humble-isaac-ros-cumotion
Clone this repository under ${ISAAC_ROS_WS}/src:

cd ${ISAAC_ROS_WS}/src && \
   git clone --recurse-submodules -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_cumotion.git isaac_ros_cumotion
Launch the Docker container using the run_dev.sh script:

cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Use rosdep to install the package's dependencies:

rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_cumotion --ignore-src -y
Build the package from source:
cd ${ISAAC_ROS_WS} && \
   colcon build --symlink-install --packages-up-to isaac_ros_cumotion
Source the ROS workspace:
Note

Make sure to repeat this step in every terminal created inside the Docker container. Because this package was built from source, the enclosing workspace must be sourced for ROS to find the package's contents.

source install/setup.bash
Run Launch File for Robot Segmentation
Continuing inside the Docker container, run the following launch file to start a demonstration of robot segmentation for a single camera using the rosbag downloaded earlier:
ros2 launch isaac_ros_cumotion robot_segmentation.launch.py
Open a second terminal inside the Docker container:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Run the rosbag file to simulate an image stream:
ros2 bag play --clock -l ${ISAAC_ROS_WS}/isaac_ros_assets/r2b_2024/r2b_robotarm \
   --remap /camera_1/aligned_depth_to_color/image_raw:=/depth_image \
   camera_1/color/camera_info:=rgb/camera_info
Open a third terminal inside the Docker container:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Run a static transform publisher, used by the segmentation node to determine the relative pose of the robot base:
ros2 run tf2_ros static_transform_publisher --frame-id ar_tag --child-frame-id base_link --x -0.30 --y 0.47 --z 0.0 --qx 0 --qy 0 --qz 0.707 --qw 0.707
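The quaternion arguments in the command above (qz = qw = 0.707) encode a 90-degree rotation about the Z axis between the ar_tag and base_link frames. A minimal sketch of that conversion, using only the Python standard library:

```python
import math

def yaw_to_quaternion(yaw_rad):
    """Quaternion (qx, qy, qz, qw) for a pure rotation about the Z axis."""
    half = yaw_rad / 2.0
    return (0.0, 0.0, math.sin(half), math.cos(half))

# A 90-degree yaw reproduces the qz/qw values passed to the publisher above.
qx, qy, qz, qw = yaw_to_quaternion(math.pi / 2.0)
print(round(qz, 3), round(qw, 3))  # 0.707 0.707
```

Any yaw angle converts the same way; recent versions of static_transform_publisher also accept roll/pitch/yaw arguments directly if you prefer angles over quaternions.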
Visualize Results of Robot Segmentation
Open a new terminal inside the Docker container:
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
Visualize the robot mask in RViz:

rviz2

Click the Add button and select By topic. In the By topic window, select the topic /cumotion/camera_1/robot_mask.

Optionally, you can also add the camera image to the RViz window by selecting the topic /camera_1/color/image_raw. This allows you to confirm that the robot mask correctly matches the camera image.

Note
Due to initial warm-up time, the visualization may take up to 1 minute to appear. During this time, the depth mask may stutter or lag behind the camera image.
Troubleshooting
Isaac ROS Troubleshooting
For solutions to problems with Isaac ROS, see troubleshooting.
API
CumotionActionServer
ROS Parameters
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| | | | Path to the XRDF file for the robot |
| | | | Path to the robot's URDF |
| | | | Speed scaling factor for the planner |
| | | | Number of mesh objects to pre-allocate in the cuMotion world |
| | | | Number of cuboid objects to pre-allocate in the cuMotion world |
| | | | Fixed time step (in seconds) used for interpolating the trajectory |
| | | | Maximum dimensions of the voxels in meters |
| | | | Size of a voxel in meters |
| | | | When true, indicates that cuMotion should read a Euclidean signed distance field (ESDF) as part of its world |
| | | | When true, indicates that cuMotion should publish its world representation |
| | | | When true, indicates that cuMotion should add an implicit ground plane |
| | | | Voxel size to use when publishing cuMotion's world |
| | | | Maximum number of voxels that may be used to publish cuMotion's world |
| | | | Topic for reading the robot's joint state |
| | | | Tool frame of the robot that should be used for planning |
| | | | Grid position at which to place cuMotion's world |
| | | | Service to call when querying the ESDF world |
| | | | When true, cuMotion's backend planning library (currently cuRobo) will log additional debug messages |
| | | | When true, the planner is allowed to override MoveIt's scaling factors |
ROS Topics Subscribed
| ROS Topic | Interface | Description |
|---|---|---|
| | | Joint states of the robot |
ROS Topics Published
| ROS Topic | Interface | Description |
|---|---|---|
| | | cuMotion's world represented as voxels for visualization |
ROS Services Requested
| ROS Service | Interface | Description |
|---|---|---|
| | | Service that takes an axis-aligned bounding box (AABB) for the planning region and returns a dense ESDF and gradient field for that region |
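The service contract above (AABB in, dense ESDF out) can be illustrated with a brute-force stand-in that samples the signed distance to a single spherical obstacle over a box: positive outside the obstacle, negative inside. This is only a sketch of what an ESDF contains; it is not the implementation behind the actual service:

```python
import math

def sphere_esdf(aabb_min, aabb_max, step, center, radius):
    """Sampled signed distance to a sphere over an axis-aligned bounding box."""
    def frange(lo, hi):
        n = int(round((hi - lo) / step))
        return [lo + i * step for i in range(n + 1)]
    field = {}
    for x in frange(aabb_min[0], aabb_max[0]):
        for y in frange(aabb_min[1], aabb_max[1]):
            for z in frange(aabb_min[2], aabb_max[2]):
                d = math.dist((x, y, z), center) - radius
                field[(x, y, z)] = d  # > 0 outside the sphere, < 0 inside
    return field

# Unit box sampled at 0.5 m with a 0.25 m-radius obstacle at its center.
field = sphere_esdf((0, 0, 0), (1, 1, 1), 0.5, (0.5, 0.5, 0.5), 0.25)
print(field[(0.5, 0.5, 0.5)])  # -0.25 at the obstacle center
```

A real ESDF is computed incrementally on the GPU over arbitrary geometry, but each returned sample has this same meaning: distance to the nearest obstacle surface, signed by containment.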
ROS Actions Advertised
| ROS Action | Interface | Description |
|---|---|---|
| | | Creates a motion plan and forwards it to MoveIt |
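Before a plan is forwarded, the trajectory is resampled at the fixed interpolation time step listed in the parameters above. The idea can be sketched with plain linear resampling of a single joint over time (an illustration of the concept, not cuRobo's interpolation scheme):

```python
def resample(times, positions, dt):
    """Linearly resample a 1-D trajectory at a fixed time step dt."""
    out_t, out_q = [], []
    t = times[0]
    i = 0
    while t <= times[-1] + 1e-9:
        # Advance to the waypoint segment containing time t.
        while i + 1 < len(times) and times[i + 1] < t:
            i += 1
        t0, t1 = times[i], times[min(i + 1, len(times) - 1)]
        if t1 == t0:
            q = positions[i]
        else:
            a = (t - t0) / (t1 - t0)
            q = positions[i] + a * (positions[i + 1] - positions[i])
        out_t.append(round(t, 9))
        out_q.append(q)
        t += dt
    return out_t, out_q

# Two sparse waypoints resampled at dt = 0.25 s.
t, q = resample([0.0, 1.0], [0.0, 2.0], 0.25)
print(t)  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(q)  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

A smaller time step yields a denser, smoother trajectory for the controller at the cost of larger messages.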
CumotionRobotSegmenter
ROS Parameters
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| | | | Path to the XRDF file for the robot |
| | | | Path to the robot's URDF |
| | | | CUDA device index to use |
| | | | Maximum distance from a given collision sphere (in meters) at which to mask points |
| | | | Maximum allowed delay (in seconds) for which depth image and joint state messages are considered synchronized |
| | | | Maximum duration (in seconds) for which to block while waiting for a transform to become available |
| | | | Topic to subscribe to for the robot's joint states |
| | | | Topic on which to publish the spheres representing the robot |
| | | | List of topics to subscribe to for input depth images |
| | | | List of topics to subscribe to for camera info corresponding to the input depth image streams |
| | | | List of topics on which to publish the robot segmentation masks for the corresponding depth image streams |
| | | | List of topics on which to publish depth images with the robot segmented out |
| | | | When true, increases logging verbosity to help with debugging |
ROS Topics Subscribed
| ROS Topic | Interface | Description |
|---|---|---|
| | | Joint states of the robot |
| | | Input depth images, including the robot to be segmented out |
| | | Camera info corresponding to the input depth image streams |
ROS Topics Published
| ROS Topic | Interface | Description |
|---|---|---|
| | | Robot represented by spheres for debugging and visualization |
| | | Robot segmentation masks for the corresponding depth image streams |
| | | Depth images with the robot segmented out |
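The relationship between the last two outputs is simple: pixels flagged by the robot mask are removed from the depth image. A minimal sketch with nested lists standing in for images (the real node uses GPU image buffers):

```python
def apply_mask(depth, robot_mask, fill=0.0):
    """Zero out depth pixels that the robot mask flags as belonging to the robot."""
    return [
        [fill if m else d for d, m in zip(depth_row, mask_row)]
        for depth_row, mask_row in zip(depth, robot_mask)
    ]

# 2x2 depth image (meters) and the corresponding robot mask.
depth = [[1.2, 0.8], [0.9, 2.0]]
robot_mask = [[False, True], [True, False]]
print(apply_mask(depth, robot_mask))  # [[1.2, 0.0], [0.0, 2.0]]
```

Downstream consumers such as obstacle-aware planners can then treat the masked depth as robot-free scene geometry.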