isaac_ros_visual_slam

Source code on GitHub.

Quickstart

Set Up Development Environment

  1. Set up your development environment by following the instructions in the getting started guide.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Warning

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure required libraries are installed.

    sudo apt-get install -y curl jq tar
    

    Then, run these commands to download the asset from NGC:

    NGC_ORG="nvidia"
    NGC_TEAM="isaac"
    PACKAGE_NAME="isaac_ros_visual_slam"
    NGC_RESOURCE="isaac_ros_visual_slam_assets"
    NGC_FILENAME="quickstart.tar.gz"
    MAJOR_VERSION=3
    MINOR_VERSION=1
    VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
    AVAILABLE_VERSIONS=$(curl -s \
        -H "Accept: application/json" "$VERSION_REQ_URL")
    LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
        .recipeVersions[]
        | .versionId as \$v
        | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
        | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
        | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
        | \$v
        " | sort -V | tail -n 1
    )
    if [ -z "$LATEST_VERSION_ID" ]; then
        echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
        echo "Found versions:"
        echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
    else
        mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
        FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
    versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
        curl -LO --request GET "${FILE_REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
        rm ${NGC_FILENAME}
    fi
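
    To confirm the assets were extracted, you can list the asset directory; the quickstart bag and interface specs file used later in this guide should be present:

    ls ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam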
    

Build isaac_ros_visual_slam

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get install -y ros-humble-isaac-ros-visual-slam
    

Run Launch File

  1. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get install -y ros-humble-isaac-ros-examples
    
  2. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=visual_slam \
    interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_interface_specs.json \
    rectified_images:=false
    
  3. Open a second terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  4. Run the rosbag file to simulate an image stream:

    ros2 bag play ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_bag --remap \
    /front_stereo_camera/left/image_raw:=/left/image_rect \
    /front_stereo_camera/left/camera_info:=/left/camera_info_rect \
    /front_stereo_camera/right/image_raw:=/right/image_rect \
    /front_stereo_camera/right/camera_info:=/right/camera_info_rect \
    /back_stereo_camera/left/image_raw:=/rear_left/image_rect \
    /back_stereo_camera/left/camera_info:=/rear_left/camera_info_rect \
    /back_stereo_camera/right/image_raw:=/rear_right/image_rect \
    /back_stereo_camera/right/camera_info:=/rear_right/camera_info_rect
    

Visualize Results

  1. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  2. Visualize and validate the output of the package with rviz:

    rviz2 -d $(ros2 pkg prefix isaac_ros_visual_slam --share)/rviz/default.cfg.rviz
    

    RViz should start displaying the point clouds and poses like below:

    https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_visual_slam/Rviz_quick_start.png

Try More Examples

To continue your exploration, check out the following suggested examples:

Coordinate Frames

This section describes the coordinate frames that are involved in the VisualSlamNode. The frames discussed below are oriented as follows:

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_visual_slam/Axes.png
  1. base_frame: The base frame of the robot or camera rig. It is assumed that all cameras are rigidly mounted to this frame, such that the transforms between the base_frame and the camera_optical_frames can be assumed to be static (an example of publishing such a static transform follows this list). The estimated odometry and SLAM pose will represent the pose of this frame.

  2. map_frame: The frame corresponding to the origin of the map that cuVSLAM creates or localizes in. The SLAM pose reported by cuVSLAM corresponds with the transform map_frame -> base_frame.

  3. odom_frame: The frame corresponding to the origin of the odometry that cuVSLAM estimates. The odometry pose reported by cuVSLAM corresponds with the transform odom_frame -> base_frame.

  4. camera_optical_frames: The frames corresponding to the optical centers of all cameras. Each frame is expected to lie at the center of the camera lens, oriented with the z-axis pointing in the view direction and the y-axis pointing toward the floor.
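
If your camera driver does not broadcast the base_frame -> camera optical frame transform on tf, you can publish it yourself. A minimal sketch using tf2_ros, where the frame names and the 10 cm offset are placeholder values for illustration:

    # Publish a static transform from base_link to a hypothetical optical frame
    # 10 cm above the base. RPY (-pi/2, 0, -pi/2) rotates a conventional base
    # frame (x forward, z up) into the optical convention (z forward, y down).
    ros2 run tf2_ros static_transform_publisher \
        --x 0.0 --y 0.0 --z 0.1 \
        --roll -1.5708 --pitch 0.0 --yaw -1.5708 \
        --frame-id base_link --child-frame-id camera_optical_frame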

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, please check the Isaac ROS Troubleshooting page.

Troubleshooting Suggestions

  1. If the target frames in RViz are being updated at a lower rate than the input image rate:

    1. Check if the input image frame rate matches the value set in the launch file by opening a new terminal and running the following command:

      ros2 topic hz --window 100 <image_topic_name>
      
  2. If RViz is not showing the poses, check the Fixed Frame value.

  3. If you frequently see Visual tracking is lost. messages, possible causes include:

    1. Fast motion causing motion blur in the frames.

    2. Low-light conditions.

    3. The wrong camera_info being published.

  4. For better performance:

    1. Increase the capture frame rate from the camera to yield better tracking results.

    2. If input images are noisy, you can enable the enable_image_denoising parameter on the node.

  5. For RealSense cameras, check that:

    1. You are using the firmware and realsense_ros versions recommended in the RealSense setup tutorial.

    2. You are using the parameters set in the RealSense tutorial launch file.

    3. If the pose estimate frames in RViz drift from the actual real-world poses and you see white dots on nearby objects (refer to the screenshot below), the emitter of the RealSense camera is on and needs to be turned off (see the example after the screenshot).

    4. The left and right images being published are not compressed or encoded; Isaac ROS Visual SLAM expects raw images.

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_visual_slam/realsense_emitter.png
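
One way to turn off the emitter is through the RealSense driver's parameter interface. A minimal sketch, assuming the realsense-ros default node name /camera/camera and its depth_module.emitter_enabled parameter (both may differ in your setup):

    # Disable the IR emitter on a running realsense2_camera node (0 = off)
    ros2 param set /camera/camera depth_module.emitter_enabled 0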
  6. For global localization:

    Isaac ROS Visual SLAM does not yet support global localization. Simultaneous localization and mapping mode is only supported for a single stereo camera; for multi-camera setups, only visual odometry mode is supported.

    If localization with respect to a map is needed, an additional module is required. We recommend using ROS 2 AMCL together with Isaac ROS Map Localization for global localization. You can find an example of how to do this in the Nova Carter Navigation tutorial.

API

Usage

ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam.launch.py
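
Once launched, you can inspect the node from a second terminal. The node name below is an assumption; confirm it with ros2 node list:

    # List running nodes, then dump the visual SLAM node's parameters
    ros2 node list
    ros2 param list /visual_slam_node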

ROS Parameters

Functional Parameters

| ROS Parameter | Type | Default | Description |
|---------------|------|---------|-------------|
| num_cameras | int | 2 | Number of cameras used. Set to 2 for a single stereo camera. Maximum value is 32. |
| min_num_images | int | num_cameras | Minimum number of images required to track the pose. If fewer than min_num_images images with a synchronized timestamp are received, the synchronizer drops all messages. |
| sync_matching_threshold_ms | double | 5.0 | Maximum duration between image timestamps to still consider them synced. |
| enable_image_denoising | bool | false | If enabled, input images are denoised. Enable this when images are noisy because of low-light conditions. |
| rectified_images | bool | true | Flag to mark whether the incoming images are rectified or raw. |
| enable_planar_mode | bool | false | If enabled, SLAM poses are modified such that the camera moves on a horizontal plane. |
| enable_localization_n_mapping | bool | true | If enabled, SLAM mode is on. If disabled, only odometry is on. Currently, SLAM mode only works when using a single stereo camera. |
| enable_imu_fusion | bool | false | If enabled, IMU data is used. |
| gyro_noise_density | double | 0.000244 | White noise parameter for the gyroscope, based on the IMU Noise Model. |
| gyro_random_walk | double | 0.000019393 | Random walk parameter for the gyroscope, based on the IMU Noise Model. |
| accel_noise_density | double | 0.001862 | White noise parameter for the accelerometer, based on the IMU Noise Model. |
| accel_random_walk | double | 0.003 | Random walk parameter for the accelerometer, based on the IMU Noise Model. |
| calibration_frequency | double | 200.0 | IMU frequency used for generating noise model parameters. |
| image_jitter_threshold_ms | double | 34.0 | Acceptable time delta between consecutive synced image messages. Exceeding this threshold might indicate message drops from the source. |
| imu_jitter_threshold_ms | double | 10.0 | Acceptable time delta between consecutive IMU messages. Exceeding this threshold might indicate message drops from the source. |
| img_mask_top | int | 0 | Number of pixels along the top border of the image to mask out when selecting features for visual tracking. |
| img_mask_bottom | int | 0 | Number of pixels along the bottom border of the image to mask out when selecting features for tracking. |
| img_mask_left | int | 0 | Number of pixels along the left border of the image from the left camera of a stereo pair (the right border for the right camera) to mask out when selecting features for tracking. |
| img_mask_right | int | 0 | Number of pixels along the right border of the image from the left camera of a stereo pair (the left border for the right camera) to mask out when selecting features for tracking. |
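
Parameters are typically set at composition time. A minimal sketch that loads the node into a generic component container with a few overrides; the container name is arbitrary, and the plugin name nvidia::isaac_ros::visual_slam::VisualSlamNode is an assumption based on this package's naming conventions:

    # Terminal 1: start a generic component container
    ros2 run rclcpp_components component_container --ros-args -r __node:=vslam_container

    # Terminal 2: load the visual SLAM component with parameter overrides
    ros2 component load /vslam_container isaac_ros_visual_slam \
        nvidia::isaac_ros::visual_slam::VisualSlamNode \
        -p num_cameras:=2 \
        -p rectified_images:=false \
        -p enable_image_denoising:=true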

Frame Parameters

| ROS Parameter | Type | Default | Description |
|---------------|------|---------|-------------|
| map_frame | std::string | map | The frame name associated with the map origin. |
| odom_frame | std::string | odom | The frame name associated with the odometry origin. |
| base_frame | std::string | base_link | The frame name associated with the base of the robot or camera rig. |
| camera_optical_frames | std::vector<std::string> | [] | The frame names associated with the cameras' optical frames, following the ROS optical frame convention (z-axis forward, y-axis down). If not specified, the frames from the camera info messages are used. |
| imu_frame | std::string | imu | The frame name associated with the IMU. |

Message Parameters

| ROS Parameter | Type | Default | Description |
|---------------|------|---------|-------------|
| image_buffer_size | int | 100 | Buffer size used by the image synchronizer. |
| imu_buffer_size | int | 50 | Buffer size used by the IMU sequencer. |
| image_qos | std::string | DEFAULT | QoS profile for the image subscribers. |
| imu_qos | std::string | DEFAULT | QoS profile for the IMU subscriber. |

Output Parameters

| ROS Parameter | Type | Default | Description |
|---------------|------|---------|-------------|
| override_publishing_stamp | bool | false | Override the timestamp received from the left image with the timestamp from rclcpp::Clock. |
| publish_map_to_odom_tf | bool | true | Enable the tf broadcaster for the map_frame -> odom_frame transform. |
| publish_odom_to_rig_tf | bool | true | Enable the tf broadcaster for the odom_frame -> base_frame transform. |
| invert_map_to_odom_tf | bool | false | Invert the map_frame -> odom_frame transform before broadcasting to the tf tree. |
| invert_odom_to_rig_tf | bool | false | Invert the odom_frame -> base_frame transform before broadcasting to the tf tree. |

Debug/Visualization Parameters

| ROS Parameter | Type | Default | Description |
|---------------|------|---------|-------------|
| enable_slam_visualization | bool | false | Main flag to enable or disable visualization. |
| enable_observations_view | bool | false | If enabled, the 2D feature pointcloud is available for visualization. |
| enable_landmarks_view | bool | false | If enabled, the landmark pointcloud is available for visualization. |
| path_max_size | int | 1024 | The maximum size of the buffer for pose trail visualization. |
| verbosity | int | 0 | Verbosity for cuVSLAM; 0 is silent. |
| enable_debug_mode | bool | false | If enabled, a debug dump (images, IMU messages, timestamps, and camera info) is saved to disk at the path indicated by debug_dump_path. |
| debug_dump_path | std::string | /tmp/cuvslam | The path to the directory in which to store the debug dump data. |
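
For example, to inspect feature tracking in RViz, the visualization parameters can be enabled at load time (continuing the hypothetical component-container sketch from the Functional Parameters section):

    # Expose 2D feature and landmark point clouds for RViz
    ros2 component load /vslam_container isaac_ros_visual_slam \
        nvidia::isaac_ros::visual_slam::VisualSlamNode \
        -p enable_slam_visualization:=true \
        -p enable_observations_view:=true \
        -p enable_landmarks_view:=true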

ROS Topics Subscribed

| ROS Topic | Interface | Description |
|-----------|-----------|-------------|
| visual_slam/image_{i} | sensor_msgs/Image | Image from the i-th camera in grayscale (i is in the range [0, num_cameras-1]). |
| visual_slam/camera_info_{i} | sensor_msgs/CameraInfo | Camera info from the i-th camera (i is in the range [0, num_cameras-1]). |
| visual_slam/imu | sensor_msgs/Imu | Sensor data from the IMU (only present if enable_imu_fusion is true). |
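
To verify that images are reaching the node, you can measure the rate on its first image topic:

    # Input images should arrive at the camera frame rate
    ros2 topic hz --window 100 /visual_slam/image_0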

ROS Topics Published

| ROS Topic | Interface | Description |
|-----------|-----------|-------------|
| visual_slam/tracking/odometry | nav_msgs/Odometry | Current odometry of the base_frame. |
| visual_slam/tracking/vo_pose_covariance | geometry_msgs/PoseWithCovarianceStamped | Current pose with covariance of the base_frame. |
| visual_slam/tracking/vo_pose | geometry_msgs/PoseStamped | Current pose of the base_frame. |
| visual_slam/tracking/slam_path | nav_msgs/Path | Trail of poses generated by SLAM. |
| visual_slam/tracking/vo_path | nav_msgs/Path | Trail of poses generated by pure visual odometry. |
| visual_slam/status | isaac_ros_visual_slam_interfaces/VisualSlamStatus | Status message for diagnostics. |
| /diagnostics | diagnostic_msgs/DiagnosticArray | Diagnostic message. Similar to the status topic but in a different format. |
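
As a quick sanity check while the quickstart bag plays, you can sample the odometry output:

    # Print a single odometry message from the tracker
    ros2 topic echo --once /visual_slam/tracking/odometry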

ROS Services Advertised

| ROS Service | Interface | Description |
|-------------|-----------|-------------|
| visual_slam/reset | isaac_ros_visual_slam_interfaces/Reset | A service to reset the node. |
| visual_slam/get_all_poses | isaac_ros_visual_slam_interfaces/GetAllPoses | A service to get the series of poses for the path traversed. |
| visual_slam/set_slam_pose | isaac_ros_visual_slam_interfaces/SetSlamPose | A service to set the estimated SLAM pose of the base_frame. |

Note

visual_slam/set_slam_pose only works in SLAM mode, i.e., with enable_localization_n_mapping set to true.
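
For example, the node can be reset from the command line. A sketch assuming the Reset service takes an empty request (verify with ros2 interface show isaac_ros_visual_slam_interfaces/srv/Reset):

    # Reset the visual SLAM state via the advertised service
    ros2 service call /visual_slam/reset isaac_ros_visual_slam_interfaces/srv/Reset "{}"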

ROS Actions Advertised

| ROS Action | Interface | Description |
|------------|-----------|-------------|
| visual_slam/save_map | isaac_ros_visual_slam_interfaces/SaveMap | An action to save the landmarks and pose graph into a map on disk. |
| visual_slam/load_map_and_localize | isaac_ros_visual_slam_interfaces/LoadMapAndLocalize | An action to load a map from disk and localize within it, given a prior pose. |
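
A map can be saved from the command line once SLAM mode is running. The goal field name map_url below is an assumption; inspect the actual goal definition with ros2 interface show isaac_ros_visual_slam_interfaces/action/SaveMap:

    # Ask the node to write the current landmarks and pose graph to disk
    ros2 action send_goal /visual_slam/save_map \
        isaac_ros_visual_slam_interfaces/action/SaveMap \
        "{map_url: /tmp/cuvslam_map}"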