Manipulation Workflow Configuration Guide#
This is a quick reference for configuring the manipulation workflow YAML files used with workflows.launch.py. For a complete parameters reference, refer to the Create Manipulator Configuration File section.
Note
Sample Configuration Files: You can find examples in the isaac_manipulator_bringup package’s params directory. These sample files contain valid parameter combinations for different robot setups.
Required Parameters#
Note
Editing Configuration Files:
Binary (Debian) Installation: Configuration files are read-only. Copy reference files before editing.
Source Installation: Edit in place or copy to a separate location.
Refer to Configure Your Workflow in the Pick and Place Tutorial for complete copy-and-edit instructions.
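The copy-before-edit step for a binary (Debian) install can be sketched as follows. The source path is an assumption (adjust it for your ROS distro and install prefix), and the block falls back to a temporary demo directory so the sketch runs anywhere:

```python
import pathlib
import shutil
import tempfile

# Assumed location of the read-only reference configs (Debian install);
# adjust for your ROS distro / install prefix.
src = pathlib.Path("/opt/ros/humble/share/isaac_manipulator_bringup/params")
dst = pathlib.Path.home() / "manipulator_configs"

if not src.exists():
    # Demo fallback so this sketch runs anywhere: fabricate a reference file.
    src = pathlib.Path(tempfile.mkdtemp()) / "params"
    src.mkdir()
    (src / "pick_and_place.yaml").write_text("workflow_type: PICK_AND_PLACE\n")
    dst = pathlib.Path(tempfile.mkdtemp()) / "editable"

# Copy the reference configs to an editable location.
shutil.copytree(src, dst, dirs_exist_ok=True)
print(sorted(p.name for p in dst.glob("*.yaml")))
```

Edit the copies in the destination directory and point workflows.launch.py at them.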
These parameters must be explicitly set in your configuration file:
| Parameter | Options | Description |
|---|---|---|
| `workflow_type` | e.g. `PICK_AND_PLACE` | Defines the manipulation workflow to execute |
| `camera_type` | e.g. `REALSENSE`, `ISAAC_SIM` | Selects the camera driver and configuration |
Configuration Example#
# Required Parameters (defaults shown)
workflow_type: PICK_AND_PLACE
camera_type: REALSENSE
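Before launching, a quick sanity check that both required keys are present can save a failed startup. This is a sketch, not an official validator; it does naive line parsing on an inline example config:

```python
# Required keys named in this guide.
REQUIRED = ("workflow_type", "camera_type")

# Inline stand-in for your config file's contents.
config_text = """\
workflow_type: PICK_AND_PLACE
camera_type: REALSENSE
"""

# Naive top-level key extraction (avoids a YAML dependency).
present = {line.split(":")[0].strip()
           for line in config_text.splitlines() if ":" in line}
missing = [key for key in REQUIRED if key not in present]
assert not missing, f"missing required keys: {missing}"
print("required keys present")
```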
Optional Parameters#
Perception workflow#
Configure the perception workflow, which provides depth estimation, object detection, segmentation, and pose estimation capabilities. Defaults are provided but can be customized:
| Parameter | Options | Default | Description |
|---|---|---|---|
| `depth_type` | `ESS_FULL`, `ESS_LIGHT`, `FOUNDATION_STEREO`, `REALSENSE`, `ISAAC_SIM` | `ESS_FULL` | Depth estimation method. Actual depth source varies by camera type and configuration. See Depth Configuration Compatibility for supported combinations |
| `object_detection_type` | e.g. `RTDETR` | `RTDETR` | Object detection network for locating objects in the scene |
| `segmentation_type` | e.g. `NONE` | `NONE` | Segmentation network for extracting object masks |
| `pose_estimation_type` | e.g. `FOUNDATION_POSE` | `FOUNDATION_POSE` | 6DOF pose estimation network for determining object poses |
Configuration Example#
# Perception Workflow (optional - defaults shown)
object_detection_type: RTDETR
segmentation_type: NONE
pose_estimation_type: FOUNDATION_POSE
depth_type: ESS_FULL
Supported Model Combinations#
Not all model combinations are compatible. The following table lists valid combinations for the perception workflow:
| Object Detection Type | Segmentation Type | Pose Estimation Type |
|---|---|---|
| `RTDETR` | `NONE` | `FOUNDATION_POSE` |
Depth Configuration Compatibility#
The actual depth source depends on your camera type and configuration. Each camera type supports different depth methods:
RealSense Cameras
| Depth Type | `enable_dnn_depth_in_realsense` | Actual Depth Source |
|---|---|---|
| Any depth type | `false` | Native RealSense depth (overrides `depth_type`) |
| `ESS_FULL` | `true` | ESS Full DNN depth |
| `ESS_LIGHT` | `true` | ESS Light DNN depth |
| `FOUNDATION_STEREO` | `true` | Foundation Stereo DNN depth |
| `ISAAC_SIM` | Any value | Simulation depth |
Isaac Sim Cameras
| Supported Depth Type | Actual Depth Source |
|---|---|
| `ISAAC_SIM` | Simulation depth |
Note
Key Configuration Rules:

- RealSense cameras have native depth capability. Set `enable_dnn_depth_in_realsense: false` to use native depth, or `true` to use DNN-based depth estimation.
- Isaac Sim cameras provide simulation depth directly and only support the `ISAAC_SIM` depth type. The `enable_dnn_depth_in_realsense` parameter is ignored.
- When using RealSense with `enable_dnn_depth_in_realsense: false`, the `depth_type` setting is overridden and native RealSense depth is used regardless.
- DNN-based depth methods (`ESS_FULL`, `ESS_LIGHT`, `FOUNDATION_STEREO`) provide better depth quality but require additional computational resources.

Unsupported Combinations:

- Isaac Sim cameras cannot use the `ESS_FULL`, `ESS_LIGHT`, `FOUNDATION_STEREO`, or `REALSENSE` depth types.
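The compatibility rules above can be encoded in a small helper to check a configuration before launch. This is a hypothetical sketch, not part of the isaac_manipulator_bringup package:

```python
# DNN-based depth estimation methods listed in this guide.
DNN_DEPTH = {"ESS_FULL", "ESS_LIGHT", "FOUNDATION_STEREO"}

def resolve_depth_source(camera_type: str, depth_type: str, enable_dnn: bool) -> str:
    """Return the effective depth source per the compatibility rules above."""
    if camera_type == "ISAAC_SIM":
        # Isaac Sim cameras only support ISAAC_SIM depth; enable_dnn is ignored.
        if depth_type != "ISAAC_SIM":
            raise ValueError(f"Isaac Sim cameras do not support depth_type={depth_type}")
        return "simulation depth"
    if camera_type == "REALSENSE":
        if not enable_dnn:
            # Native depth overrides whatever depth_type is set to.
            return "native RealSense depth"
        if depth_type in DNN_DEPTH:
            return f"{depth_type} DNN depth"
        raise ValueError(f"depth_type={depth_type} is not a DNN depth method")
    raise ValueError(f"unknown camera_type={camera_type}")

print(resolve_depth_source("REALSENSE", "ESS_FULL", enable_dnn=True))
print(resolve_depth_source("REALSENSE", "ESS_LIGHT", enable_dnn=False))
```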
Workspace Configuration#
| Parameter | Default | Description |
|---|---|---|
| `moveit_collision_objects_scene_file` | None | File path to a `.scene` file describing static workspace obstacles for collision avoidance |
Configuration Example#
# Static Obstacles (for collision avoidance)
moveit_collision_objects_scene_file: /path/to/scene.scene
Warning
Add any workspace obstacles that are not visible to the cameras so motion planning can avoid them. Refer to the cuMotion documentation for instructions on creating .scene files.
Advanced Parameters#
These parameters are only needed in specific scenarios:
| Parameter | When Required | Default | Description |
|---|---|---|---|
| `time_sync_slop` | For robot segmentation synchronization | | Synchronization threshold used to pair joint states with depth images when producing segmented depth. Values that are too small prevent messages from synchronizing, while values that are too large produce segmented images that do not accurately reflect the robot's state |
| `enable_nvblox` | To reduce system load or disable collision avoidance | | Enable/disable nvblox for 3D scene reconstruction and dynamic collision avoidance. Setting to `false` reduces computational load but disables dynamic collision avoidance and robot segmentation |
| `enable_dnn_depth_in_realsense` | For neural network-based depth estimation with RealSense cameras | | Enable DNN-based depth estimation for RealSense cameras. See Depth Configuration Compatibility for camera-specific behavior and supported combinations |
Configuration Example#
# Advanced Parameters
time_sync_slop: 0.1 # Sync tolerance for joint states and depth images
enable_nvblox: false # Reduce system load
enable_dnn_depth_in_realsense: true # Better depth quality for RealSense
Note
FoundationStereo Timing Consideration: FoundationStereo can take around 2 seconds to generate a depth image. To keep depth images and joint states from the robot driver synchronized, set time_sync_slop to a value close to 3 seconds.
System Performance: Disabling enable_nvblox reduces computational load but eliminates dynamic collision avoidance and robot segmentation capabilities.
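Taken together, the notes above suggest a configuration fragment like the following when using FoundationStereo with a RealSense camera (a sketch; values follow the guidance above):

```yaml
# FoundationStereo with a RealSense camera (sketch)
depth_type: FOUNDATION_STEREO
enable_dnn_depth_in_realsense: true
time_sync_slop: 3.0   # FoundationStereo can take ~2 s per depth image
```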
Usage#
ros2 launch isaac_manipulator_bringup workflows.launch.py \
manipulator_workflow_config:=/path/to/your/config.yaml
More Information#
isaac_manipulator_bringup - Launch file parameter details