Tutorial: Localization with cuVGL#

After you have created a cuVGL map by following Tutorial: cuVGL Map Creation, you can use it for localization.

Use Global Localization Node in ROS 2#

Follow the steps to Set Up the Development Environment and Install or Build isaac_ros_visual_global_localization before running the following commands.

Export TensorRT Engine Files#

Note

This is a one-time setup. Export the engine files for a feature type once, then reuse the same engine files for all subsequent runs.

# Directory where the exported TensorRT engine files will be written.
OUTPUT_MODEL_DIR="path_to_output_model_dir"

# Export the LightGlue feature-matching engine.
$(ros2 pkg prefix isaac_ros_visual_mapping)/bin/visual_mapping/export_lightglue_engine \
    --worker_config_file $(ros2 pkg prefix --share isaac_ros_visual_mapping)/configs/isaac/matching_task_worker_config.pb.txt \
    --model_dir $(ros2 pkg prefix --share isaac_ros_visual_mapping)/models \
    --output_model_dir $OUTPUT_MODEL_DIR

# Export the keypoint-extractor engine.
$(ros2 pkg prefix isaac_ros_visual_mapping)/bin/visual_mapping/export_extractor_engine \
    --configure_file $(ros2 pkg prefix --share isaac_ros_visual_mapping)/configs/isaac/keypoint_creation_config.pb.txt \
    --model_dir $(ros2 pkg prefix --share isaac_ros_visual_mapping)/models \
    --output_model_dir $OUTPUT_MODEL_DIR
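After the two exports finish, you can sanity-check that the engine files were written. The exact file names depend on the feature type and the models shipped with the package:

# List the exported TensorRT engine files.
ls -lh "$OUTPUT_MODEL_DIR"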

Live Localization in ROS 2#

Follow these steps to launch the localization node:

  1. Create a map by following Tutorial: cuVGL Map Creation.

  2. Launch the container and open a terminal inside it.

  3. Inside the container, export the TensorRT engine files as described in Export TensorRT Engine Files.

  4. Inside the container, run the following command to launch the localization node:

    ros2 launch isaac_ros_visual_global_localization isaac_ros_visual_global_localization_node.launch.py \
       camera_names:='front_stereo_camera,left_stereo_camera,right_stereo_camera,back_stereo_camera' \
       vgl_map_dir:=${MAP_DIR}/cuvgl_map \
       vgl_model_dir:=$OUTPUT_MODEL_DIR
    

More parameters are available in the include/visual_global_localization.launch.py file. You can also include visual_global_localization.launch.py directly in your own launch file.
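To list every launch argument and its default value, you can use the standard --show-args flag of ros2 launch:

ros2 launch isaac_ros_visual_global_localization \
    isaac_ros_visual_global_localization_node.launch.py --show-args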

Localization with Rosbag Replay#

We provide an end-to-end ROS launch file for replaying a rosbag and running global localization. It continuously runs global localization on incoming frames, even if a localization pose has already been computed. It also enables the 3D lidar to create a point cloud overlay on the occupancy map using the resulting poses.

  1. Create a map by following Tutorial: cuVGL Map Creation.

  2. Record a localization rosbag using appropriate data recording tools (see the example recording command after these steps). Keep the localization trajectory within 1 m of the map trajectory's coverage.

  3. Inside the container, export the TensorRT engine files as described in Export TensorRT Engine Files.

  4. Inside the container, install RViz:

    sudo apt-get install -y ros-jazzy-rviz2
    source /opt/ros/jazzy/setup.bash
    
  5. Inside the container, run the following command to replay the localization rosbag and localize the robot:

    ros2 launch isaac_ros_visual_global_localization run_cuvgl_rosbag_replay.launch.py \
       rosbag_path:=$ROS_BAG_PATH \
       camera_names:='front_stereo_camera,left_stereo_camera,right_stereo_camera,back_stereo_camera' \
       vgl_map_dir:=${MAP_DIR}/cuvgl_map \
       occupancy_map_yaml_file:=${MAP_DIR}/occupancy_map.yaml \
       vgl_model_dir:=$OUTPUT_MODEL_DIR
    
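For step 2, the topics to record depend on your robot. The sketch below is illustrative only, assuming a four-camera rig matching the camera_names above; replace the topic names with the ones your robot actually publishes:

# Sketch only: record stereo images, camera info, and lidar points for replay.
# All topic names here are examples, not prescribed by the package.
ros2 bag record -o $ROS_BAG_PATH \
    /front_stereo_camera/left/image_raw /front_stereo_camera/left/camera_info \
    /front_stereo_camera/right/image_raw /front_stereo_camera/right/camera_info \
    /front_3d_lidar/lidar_points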

Visualize Results#

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.1/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_mapping_and_localization/visualize_vgl_pose.png

Visualize the /map, /front_3d_lidar/lidar_points_filtered, and /visual_localization/pose topics in RViz or Foxglove. The alignment between the point cloud and the occupancy map indicates the accuracy of the result pose, which is used to transform the point cloud into the occupancy map frame. The axes show the localization pose, with the red axis pointing in the robot's forward direction.
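You can also inspect the localization result directly from the command line by echoing the pose topic named above:

# Print a single localization pose message and exit.
ros2 topic echo /visual_localization/pose --once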

Coordinate Frames#

This section describes the coordinate frames that are involved in the VisualGlobalLocalizationNode. The frames discussed below are oriented as follows:

https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/release-4.1/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_visual_slam/Axes.png
  • base_frame: The base frame of the robot or camera rig. It is assumed that all cameras are rigidly mounted to this frame, such that the transforms between the base_frame and the camera_optical_frames can be assumed to be static. The estimated odometry and SLAM pose will represent the pose of this frame.

  • map_frame: The frame corresponding to the origin of the map that cuVGL creates or localizes in. The SLAM pose reported by cuVGL corresponds with the transform map_frame -> base_frame.

  • camera_optical_frames: The frames corresponding to the optical center of each camera. Each frame is expected to lie at the center of the camera lens, oriented with the z-axis pointing in the view direction and the y-axis pointing toward the floor.
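To verify these frames at runtime, you can query TF directly. A minimal check, assuming the frames are named map and base_link (your configured map_frame and base_frame may differ):

# Print the transform between the map and base frames as seen on TF.
ros2 run tf2_ros tf2_echo map base_link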

Troubleshooting#

  • If localization frequently fails to return a pose, consider relaxing the rejection criteria so that less confident results are returned, by setting vgl_localization_precision_level:=0.

  • To print more debug messages from the localization node, use the launch parameters vgl_verbose_logging:=True vgl_init_glog:=True vgl_glog_v:=1, as shown below.
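For example, a debug invocation that combines these options with the live localization launch from above:

ros2 launch isaac_ros_visual_global_localization isaac_ros_visual_global_localization_node.launch.py \
   camera_names:='front_stereo_camera,left_stereo_camera,right_stereo_camera,back_stereo_camera' \
   vgl_map_dir:=${MAP_DIR}/cuvgl_map \
   vgl_model_dir:=$OUTPUT_MODEL_DIR \
   vgl_localization_precision_level:=0 \
   vgl_verbose_logging:=True vgl_init_glog:=True vgl_glog_v:=1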

Use cuVGL in C++#

See the cuVGL section in the isaac_ros_visual_mapping package for the C++ API.

Source code available on GitHub.