isaac_ros_visual_slam
Source code on GitHub.
Quickstart
Set Up Development Environment
Set up your development environment by following the instructions in getting started.
Clone `isaac_ros_common` under `${ISAAC_ROS_WS}/src`:

```bash
cd ${ISAAC_ROS_WS}/src && \
   git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
```
(Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.
Warning
We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.
Download Quickstart Assets
Download quickstart data from NGC:
Make sure the required libraries are installed:

```bash
sudo apt-get install -y curl jq tar
```
Then, run these commands to download the asset from NGC:

```bash
NGC_ORG="nvidia"
NGC_TEAM="isaac"
PACKAGE_NAME="isaac_ros_visual_slam"
NGC_RESOURCE="isaac_ros_visual_slam_assets"
NGC_FILENAME="quickstart.tar.gz"
MAJOR_VERSION=3
MINOR_VERSION=1
VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
AVAILABLE_VERSIONS=$(curl -s \
    -H "Accept: application/json" "$VERSION_REQ_URL")
LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
    .recipeVersions[]
    | .versionId as \$v
    | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
    | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
    | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
    | \$v
    " | sort -V | tail -n 1
)
if [ -z "$LATEST_VERSION_ID" ]; then
    echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
    echo "Found versions:"
    echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
else
    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
    FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
    curl -LO --request GET "${FILE_REQ_URL}" && \
    tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
    rm ${NGC_FILENAME}
fi
```
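The version-selection step in the script above filters release tags of the form `MAJOR.MINOR.PATCH` and keeps the newest one whose major version matches and whose minor version does not exceed the requested minor. A minimal Python sketch of the same logic (the tag list below is illustrative, not actual NGC data):

```python
def pick_latest_compatible(versions, major, minor):
    """Mirror the jq filter: same major, minor <= requested, newest wins."""
    def parse(v):
        parts = v.split(".")
        # Keep only well-formed MAJOR.MINOR.PATCH tags.
        if len(parts) != 3 or not all(p.isdigit() for p in parts):
            return None
        return tuple(int(p) for p in parts)

    candidates = [parse(v) for v in versions]
    candidates = [c for c in candidates
                  if c is not None and c[0] == major and c[1] <= minor]
    return ".".join(map(str, max(candidates))) if candidates else None

# Illustrative tag list, not actual NGC data.
tags = ["2.1.0", "3.0.0", "3.1.0", "3.1.1", "3.2.0", "snapshot"]
print(pick_latest_compatible(tags, 3, 1))  # 3.1.1
```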
Build isaac_ros_visual_slam
Launch the Docker container using the `run_dev.sh` script:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
```
Install the prebuilt Debian package:

```bash
sudo apt-get install -y ros-humble-isaac-ros-visual-slam
```
Alternatively, to build the package from source, clone this repository under `${ISAAC_ROS_WS}/src`:

```bash
cd ${ISAAC_ROS_WS}/src && \
   git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam.git isaac_ros_visual_slam
```
Launch the Docker container using the `run_dev.sh` script:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
```
Use `rosdep` to install the package's dependencies:

```bash
rosdep install --from-paths ${ISAAC_ROS_WS}/src/isaac_ros_visual_slam --ignore-src -y
```
Build the package from source:
```bash
cd ${ISAAC_ROS_WS}/ && \
   colcon build --symlink-install --packages-up-to isaac_ros_visual_slam
```
Source the ROS workspace:
Note
Make sure to repeat this step in every terminal created inside the Docker container.
Because this package was built from source, the enclosing workspace must be sourced for ROS to be able to find the package’s contents.
```bash
source install/setup.bash
```
Run Launch File
Continuing inside the Docker container, install the following dependencies:
```bash
sudo apt-get install -y ros-humble-isaac-ros-examples
```
Run the following launch file to spin up a demo of this package using the quickstart rosbag:
```bash
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=visual_slam \
   interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_interface_specs.json \
   rectified_images:=false
```
Open a second terminal inside the Docker container:
```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
```
Run the rosbag file to simulate an image stream:
```bash
ros2 bag play ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_bag --remap \
   /front_stereo_camera/left/image_raw:=/left/image_rect \
   /front_stereo_camera/left/camera_info:=/left/camera_info_rect \
   /front_stereo_camera/right/image_raw:=/right/image_rect \
   /front_stereo_camera/right/camera_info:=/right/camera_info_rect \
   /back_stereo_camera/left/image_raw:=/rear_left/image_rect \
   /back_stereo_camera/left/camera_info:=/rear_left/camera_info_rect \
   /back_stereo_camera/right/image_raw:=/rear_right/image_rect \
   /back_stereo_camera/right/camera_info:=/rear_right/camera_info_rect
```
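The remappings above follow a single pattern: each raw camera topic from the rosbag is renamed to the rectified topic name that the launch file expects. A small Python sketch that generates the same `--remap` pairs (topic names are taken from the command above):

```python
def remap_pairs():
    # (rosbag camera namespace, target namespace) as used in the quickstart.
    cameras = [
        ("front_stereo_camera/left", "left"),
        ("front_stereo_camera/right", "right"),
        ("back_stereo_camera/left", "rear_left"),
        ("back_stereo_camera/right", "rear_right"),
    ]
    pairs = []
    for src, dst in cameras:
        pairs.append(f"/{src}/image_raw:=/{dst}/image_rect")
        pairs.append(f"/{src}/camera_info:=/{dst}/camera_info_rect")
    return pairs

print(" ".join(remap_pairs()))
```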
Ensure that you have already set up your RealSense camera using the RealSense setup tutorial. If you have not, please set up the sensor and then restart this quickstart from the beginning.
Continuing inside the Docker container, install the following dependencies:
```bash
sudo apt-get install -y ros-humble-isaac-ros-examples ros-humble-isaac-ros-realsense
```
Run the following launch file to spin up a demo of this package using a RealSense camera:
```bash
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=realsense_stereo_rect,visual_slam \
   interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_interface_specs.json \
   base_frame:=camera_link camera_optical_frames:="['camera_infra1_optical_frame', 'camera_infra2_optical_frame']"
```
Ensure that you have already set up your Hawk camera using the Hawk setup tutorial. If you have not, please set up the sensor and then restart this quickstart from the beginning.
Continuing inside the Docker container, install the following dependencies:
```bash
sudo apt-get install -y ros-humble-isaac-ros-examples ros-humble-isaac-ros-argus-camera
```
Run the following launch file to spin up a demo of this package using a Hawk camera:
```bash
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=argus_stereo,rectify_stereo,visual_slam \
   interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/quickstart_interface_specs.json \
   base_frame:=camera camera_optical_frames:="['left_cam', 'right_cam']" module_id:=5
```
Ensure that you have already set up your ZED camera using the ZED setup tutorial.
Continuing inside the Docker container, install dependencies:
```bash
sudo apt-get install -y ros-humble-isaac-ros-examples ros-humble-isaac-ros-stereo-image-proc ros-humble-isaac-ros-zed
```
Run the following launch file to spin up a demo of this package using a ZED Camera:
```bash
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py \
   launch_fragments:=zed_stereo_rect,visual_slam \
   base_frame:=zed2_camera_center camera_optical_frames:="['zed2_left_camera_optical_frame', 'zed2_right_camera_optical_frame']" \
   interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_visual_slam/zed2_quickstart_interface_specs.json
```
Note
If you are using the ZED X series, replace `zed2_quickstart_interface_specs.json` with `zedx_quickstart_interface_specs.json` in the above command.
Visualize Results
Open a new terminal inside the Docker container:
```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
   ./scripts/run_dev.sh
```
Visualize and validate the output of the package with RViz:

```bash
rviz2 -d $(ros2 pkg prefix isaac_ros_visual_slam --share)/rviz/default.cfg.rviz
```
RViz should start displaying the point clouds and poses.
Try More Examples
To continue your exploration, check out the following suggested examples:
Coordinate Frames
This section describes the coordinate frames involved in the `VisualSlamNode`. The frames discussed below are oriented as follows:

- `base_frame`: The base frame of the robot or camera rig. All cameras are assumed to be rigidly mounted to this frame, so the transforms between the `base_frame` and the `camera_optical_frames` can be assumed to be static. The estimated odometry and SLAM pose represent the pose of this frame.
- `map_frame`: The frame corresponding to the origin of the map that cuVSLAM creates or localizes in. The SLAM pose reported by cuVSLAM corresponds to the transform `map_frame` -> `base_frame`.
- `odom_frame`: The frame corresponding to the origin of the odometry that cuVSLAM estimates. The odometry pose reported by cuVSLAM corresponds to the transform `odom_frame` -> `base_frame`.
- `camera_optical_frames`: The frames corresponding to the optical center of every camera. Each frame is expected to lie at the center of the camera lens, oriented with the z-axis pointing in the view direction and the y-axis pointing toward the floor.
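In a typical ROS TF tree these frames are chained as `map_frame` -> `odom_frame` -> `base_frame`. Since cuVSLAM reports `map_frame` -> `base_frame` (SLAM) and `odom_frame` -> `base_frame` (odometry), the `map_frame` -> `odom_frame` correction can be derived by composing one transform with the inverse of the other. A 2D sketch with homogeneous matrices (pure Python; the poses are illustrative, not recorded data):

```python
import math

def make_tf(x, y, theta):
    """2D homogeneous rigid transform."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(t):
    """Invert a rigid 2D transform: rotation transposed, translation -R^T * t."""
    (r00, r01, tx), (r10, r11, ty), _ = t
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

# Illustrative poses: SLAM says base is at (2, 1); odometry drifted to (2.1, 0.9).
map_T_base = make_tf(2.0, 1.0, 0.0)
odom_T_base = make_tf(2.1, 0.9, 0.0)

# map -> odom correction: map_T_odom = map_T_base * inverse(odom_T_base)
map_T_odom = matmul(map_T_base, invert(odom_T_base))
print(map_T_odom[0][2], map_T_odom[1][2])  # approximately -0.1, 0.1
```

Composing the derived correction with the odometry transform recovers the SLAM pose, which is how the two reported poses stay consistent in one TF tree.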
Troubleshooting
Isaac ROS Troubleshooting
For solutions to problems with Isaac ROS, please check here.
Troubleshooting Suggestions
- If the target frames in RViz update at a lower rate than the input image rate, check whether the input image frame rate matches the value set in the launch file by opening a new terminal and running:

  ```bash
  ros2 topic hz --window 100 <image_topic_name>
  ```

- If RViz is not showing the poses, check the Fixed Frame value.
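`ros2 topic hz --window 100` reports a rate estimated from the arrival times of the most recent messages in the window. The same window-based estimate can be sketched in plain Python (the timestamps below are synthetic):

```python
def window_rate(stamps, window=100):
    """Mean rate (Hz) over the last `window` message timestamps, in seconds."""
    recent = stamps[-window:]
    if len(recent) < 2:
        return None
    # (number of intervals) / (total elapsed time in the window)
    return (len(recent) - 1) / (recent[-1] - recent[0])

# Synthetic 30 Hz stream: one message every 1/30 s.
stamps = [i / 30.0 for i in range(200)]
print(round(window_rate(stamps), 1))  # 30.0
```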
- If you see `Visual tracking is lost.` messages frequently, possible causes include:
  - Fast motion causing motion blur in the frames.
  - Low-light conditions.
  - The wrong `camera_info` being published.
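Motion blur, the first cause above, can often be detected offline with a variance-of-Laplacian check: blurred frames have fewer sharp edges, so the Laplacian response has lower variance. A minimal dependency-free sketch (the images and any threshold you might pick are illustrative; this is not something the node exposes):

```python
def laplacian_variance(img):
    """Variance of a 4-neighbor Laplacian over a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def box_blur(img):
    """3x3 box blur, leaving the border unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

# Sharp checkerboard vs. its blurred copy: the sharp one scores much higher.
sharp = [[(x + y) % 2 * 255.0 for x in range(16)] for y in range(16)]
blurred = box_blur(sharp)
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```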
- For better performance:
  - Increase the capture frame rate from the camera to yield better tracking results.
  - If input images are noisy, you can use the `denoise_input_images` flag in the node.
- For RealSense cameras, check that:
  - You are using the firmware and `realsense_ros` versions recommended in the RealSense setup tutorial.
  - You are using the parameters set in the RealSense tutorial launch file.
  - If the pose estimate frames in RViz drift from actual real-world poses and you see white dots on nearby objects, the emitter of the RealSense camera is on and needs to be turned off.
  - The left and right images being published are not compressed or encoded; Isaac ROS Visual SLAM expects raw images.
For global localization:

- Isaac ROS Visual SLAM does not yet support global localization. Simultaneous localization and mapping mode is only supported for a single stereo camera; for multi-camera setups, only visual odometry mode is supported.
- If localization with respect to a map is needed, an additional module is required. We recommend using ROS 2 AMCL together with Isaac ROS Map Localization for global localization. You can find an example of how to do this in the Nova Carter Navigation tutorial.
API
Usage
```bash
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam.launch.py
```
ROS Parameters
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| **Functional Parameters** | | | |
| | | | Number of cameras used. Set to 2 for a single stereo camera. Maximum value is 32. |
| | | | Minimum number of images required to track the pose. If less than … |
| | | | Maximum duration between image timestamps to still consider them synced. |
| `denoise_input_images` | | | If enabled, input images are denoised. Can be enabled when images are noisy because of low-light conditions. |
| `rectified_images` | | | Flag to mark whether the incoming images are rectified or raw. |
| | | | If enabled, SLAM poses are modified such that the camera moves on a horizontal plane. |
| `enable_localization_n_mapping` | | | If enabled, SLAM mode is on. If disabled, only odometry is on. Currently, SLAM mode only works when using a single stereo camera. |
| | | | If enabled, IMU data is used. |
| | | | White-noise parameter for the gyroscope, based on the IMU noise model. |
| | | | Random-walk parameter for the gyroscope, based on the IMU noise model. |
| | | | White-noise parameter for the accelerometer, based on the IMU noise model. |
| | | | Random-walk parameter for the accelerometer, based on the IMU noise model. |
| | | | IMU frequency used for generating the noise model parameters. |
| | | | Acceptable time delta between consecutive synced image messages. Exceeding this threshold may indicate message drops from the source. |
| | | | Acceptable time delta between consecutive IMU messages. Exceeding this threshold may indicate message drops from the source. |
| | | | Number of pixels at the upper border of the image masked when selecting features for visual tracking. |
| | | | Number of pixels at the lower border of the image masked when selecting features for tracking. |
| | | | Number of pixels at the left border of the image from the left camera of the stereo pair (the right border for the right camera) masked when selecting features for tracking. |
| | | | Number of pixels at the right border of the image from the left camera of the stereo pair (the left border for the right camera) masked when selecting features for tracking. |
| **Frame Parameters** | | | |
| `map_frame` | | | The frame name associated with the map origin. |
| `odom_frame` | | | The frame name associated with the odometry origin. |
| `base_frame` | | | The frame name associated with the base of the robot or camera rig. |
| `camera_optical_frames` | | | The frame names associated with the cameras' optical frames, using ROS optical frame conventions (z-axis forward, y-axis down). If not specified, the frames from the camera info messages are used. |
| | | | The frame name associated with the IMU. |
| **Message Parameters** | | | |
| | | | Buffer size used by the image synchronizer. |
| | | | Buffer size used by the IMU sequencer. |
| | | | QoS profile for the image subscribers. |
| | | | QoS profile for the IMU subscriber. |
| **Output Parameters** | | | |
| | | | Override the timestamp received from the left image with the timestamp from … |
| | | | Enable … |
| | | | Enable … |
| | | | Invert the … |
| | | | Invert the … |
| **Debug/Visualization Parameters** | | | |
| | | | Main flag to enable or disable visualization. |
| | | | If enabled, the 2D feature point cloud is available for visualization. |
| | | | If enabled, the landmark point cloud is available for visualization. |
| | | | Maximum size of the buffer for pose trail visualization. |
| | | | Verbosity for cuVSLAM; 0 is silent. |
| | | | If enabled, a debug dump (images, IMU messages, timestamps, and camera info) is saved to disk at the path indicated by … |
| | | | The path to the directory in which to store the debug dump data. |
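The gyroscope and accelerometer noise parameters above follow the standard continuous-time IMU noise model (as used by tools such as Kalibr): a white-noise density (e.g. rad/s/√Hz for the gyroscope) and a bias random walk. At a sampling rate f, the per-sample white-noise standard deviation is the density times √f, and the per-sample bias random-walk step is the random-walk density divided by √f. A small sketch of that conversion (the numeric values are illustrative, not recommended settings):

```python
import math

def discrete_noise_sigma(noise_density, rate_hz):
    """Per-sample white-noise std dev from a continuous noise density."""
    return noise_density * math.sqrt(rate_hz)

def discrete_random_walk_sigma(random_walk, rate_hz):
    """Per-sample bias random-walk std dev from a continuous random-walk density."""
    return random_walk / math.sqrt(rate_hz)

# Illustrative gyroscope values at a 200 Hz IMU.
gyro_noise_density = 0.000244  # rad/s/sqrt(Hz)
gyro_random_walk = 0.000019    # rad/s^2/sqrt(Hz)
imu_rate = 200.0

print(discrete_noise_sigma(gyro_noise_density, imu_rate))
print(discrete_random_walk_sigma(gyro_random_walk, imu_rate))
```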
ROS Topics Subscribed
| ROS Topic | Interface | Description |
|---|---|---|
| | `sensor_msgs/Image` | Image from the i-th camera in grayscale (i is in the range [0, …)). |
| | `sensor_msgs/CameraInfo` | Camera info from the i-th camera (i is in the range [0, …)). |
| | `sensor_msgs/Imu` | Sensor data from the IMU (only present if …). |
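The node synchronizes the per-camera image streams before tracking: images whose timestamps differ by at most the sync-matching threshold are treated as one synced set, and larger gaps may indicate dropped messages. A simplified pure-Python sketch of threshold-based timestamp pairing for a stereo pair (not the node's actual implementation):

```python
def match_stereo(left_stamps, right_stamps, threshold_ms):
    """Greedily pair left/right timestamps (ms) that differ by <= threshold_ms."""
    pairs, j = [], 0
    for lt in left_stamps:
        # Advance past right stamps that are too old to ever match.
        while j < len(right_stamps) and right_stamps[j] < lt - threshold_ms:
            j += 1
        if j < len(right_stamps) and abs(right_stamps[j] - lt) <= threshold_ms:
            pairs.append((lt, right_stamps[j]))
            j += 1
    return pairs

left = [0.0, 33.3, 66.6, 100.0]
right = [1.0, 34.0, 80.0, 101.0]   # third right frame arrived 13 ms late
print(match_stereo(left, right, threshold_ms=5.0))
# [(0.0, 1.0), (33.3, 34.0), (100.0, 101.0)] -- the late frame is dropped
```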
ROS Topics Published
| ROS Topic | Interface | Description |
|---|---|---|
| | `nav_msgs/Odometry` | Current odometry of the `base_frame`. |
| | `geometry_msgs/PoseWithCovarianceStamped` | Current pose with covariance of the `base_frame`. |
| | `geometry_msgs/PoseStamped` | Current pose of the `base_frame`. |
| | `nav_msgs/Path` | Trail of poses generated by SLAM. |
| | `nav_msgs/Path` | Trail of poses generated by pure visual odometry. |
| | | Status message for diagnostics. |
| | `diagnostic_msgs/DiagnosticArray` | Diagnostic message. Similar to … |
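Because both a SLAM path and a pure visual-odometry path are published, comparing their endpoints gives a quick measure of how much drift loop closure has corrected. A sketch with illustrative 2D positions (not recorded data):

```python
import math

def endpoint_drift(vo_path, slam_path):
    """Euclidean distance between the final poses of two (x, y) paths."""
    (vx, vy), (sx, sy) = vo_path[-1], slam_path[-1]
    return math.hypot(vx - sx, vy - sy)

# Illustrative loop trajectory: SLAM closes the loop back to the origin,
# while pure visual odometry accumulates a small drift.
slam_path = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0), (0.0, 5.0), (0.0, 0.0)]
vo_path = [(0.0, 0.0), (5.0, 0.1), (5.1, 5.1), (0.1, 5.2), (0.3, 0.4)]

print(round(endpoint_drift(vo_path, slam_path), 2))  # 0.5
```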
ROS Services Advertised
| ROS Service | Interface | Description |
|---|---|---|
| | | A service to reset the node. |
| | | A service to get the series of poses for the path traversed. |
| `visual_slam/set_slam_pose` | | A service to set the estimated SLAM pose of the `base_frame`. |
Note

`visual_slam/set_slam_pose` only works in SLAM mode, i.e., with `enable_localization_n_mapping` set to `true`.
ROS Actions Advertised
| ROS Action | Interface | Description |
|---|---|---|
| | | An action to save the landmarks and pose graph into a map and onto the disk. |
| | | An action to load the map from the disk and localize within it given a prior pose. |