Tutorial: Mapping and Localization with Isaac Perceptor
This tutorial describes how to use the offline mapping capabilities of Isaac Perceptor, including visual global localization (cuVGL) map creation, cuVSLAM map creation, and occupancy grid map creation, and how to run Isaac Perceptor with localization in the resulting map.
Map Types
Isaac Perceptor can produce several different types of maps when provided with rosbags. The following maps are generated by the create_map_offline.py script shown below.
cuVSLAM Map
cuVSLAM can create a map of an environment, load a saved map, and use that map to improve localization performance and maintain a consistent global frame of reference.
Visual Global Localization Map
During startup or after losing localization, visual global localization can be used to globally localize against a previously created map and operate in a shared global frame of reference.
For more info about visual global localization, see: Concepts: Visual Global Localization
Occupancy Grid Map
Navigation packages such as nav2 use an occupancy grid map to represent the obstacles in a map. Isaac Perceptor uses cuVSLAM and Nvblox to build an occupancy grid map using cameras.
Note
The generated occupancy map is only intended to be used for global path planning. Specifically, it is not intended to be used for 2D lidar localization. The generated map may contain slight artifacts and imprecisions, and is therefore expected to be used with a planner that inflates obstacles.
Note
Because ROS 2 rosbag replay is nondeterministic, each run of the occupancy map creation may produce a slightly different map.
Data Collection
Complete the instructions in the isaac_ros_nova_recorder Quickstart to record data with the Nova recorder.
Use the nova-carter_hawk-4_no-3d-lidar configuration for data recording with the Nova Carter, which records the front, back, left, and right stereo cameras. Use the hawk-3.yaml configuration for data recording with the Nova Orin Developer Kit, which records the front, left, and right stereo cameras.
Drive the robot around the environment and capture as much of it as possible. Image retrieval requires a vocabulary built from the complete set of the environment’s features, and it works best in environments with varied features. Featureless environments, or those with predominantly repetitive structures, can result in failed localization. We recommend moving the robot for at least 1 minute within a 5x5 m area where it can observe various objects such as chairs, tables, shelves, and other office furniture.
To generate a more accurate map, follow the guidelines for the robot’s trajectories during data collection:
Closed Loop Segments: Build mapping trajectories in loops or segments of around 5-10 meters. This ensures the same area is captured from different angles, which improves map detail and reduces drift.
Avoid Long Straight Paths: Long drives in unexplored areas accumulate drift risk. Use short, closed loops instead.
Vary Camera Angles: Capturing locations from multiple angles makes the visual map more reliable for future localization.
Schematic examples of efficient mapping paths in both narrow corridors and open spaces, demonstrating optimal patterns for capturing comprehensive visual data and reducing tracking drift.
Note
To mitigate the effects of tracking drift, Multi-camera SLAM is used. For optimal pose correction and map accuracy, Loop Closing must be effectively exploited. This mechanism corrects the robot’s pose when the robot revisits a previously mapped location.
For localization rosbag collection, make sure to record at least 10 seconds of stationary data at the beginning of the rosbag to allow global localization to initialize correctly. The localization trajectory should stay within the mapped area and not deviate too much from the mapping trajectory.
Running the Application
First, SSH into your robot (instructions for the Nova Carter and the Nova Orin Developer Kit). Once you have SSH’d into the robot, follow the instructions below to map and localize against maps.
Creating Maps
Warning
Map creation uses all available CPU and GPU resources; make sure nothing important is running at the same time.
Warning
Glass and very thin structures such as wires and cable railings cannot be detected by the current stereo networks used for depth estimation. If you have such structures in your environment, you might need to modify the environment so that there are larger visible features so that unnavigable areas are correctly estimated.
Note
The quality of the occupancy map depends on the disparity data provided to nvblox. Degraded results may occur in wall corners. Improved results may be achieved with alternative stereo modalities, such as a different stereo camera or other AI/DNN models for stereo disparity estimation.
Make sure the rosbag and output map folder paths are mounted in the Docker container. You can add them in ${ISAAC_ROS_WS}/src/isaac_ros_common/scripts/.isaac_ros_dev-dockerargs and then restart the container.
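For example, each path can be exposed to the container by adding a -v mount argument to that file. The paths below are illustrative; they match the $RECORDINGS_FOLDER and $MAPS_FOLDER defaults used later in this tutorial, and must be adjusted to your setup:

```
-v /mnt/nova_ssd/recordings:/mnt/nova_ssd/recordings
-v /mnt/nova_ssd/maps:/mnt/nova_ssd/maps
```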
Install isaac_mapping_ros on your Nova Carter or Nova Orin Developer Kit and create the maps mentioned above using the following commands:
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Install the required Debian packages:
sudo apt update
sudo apt-get install -y ros-humble-isaac-mapping-ros
source /opt/ros/humble/setup.bash
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
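As a convenience, a small shell helper (hypothetical, not part of Isaac ROS) can validate the ID before exporting it, so an out-of-range value is caught immediately:

```shell
# Hypothetical helper: refuse to export an out-of-range ROS_DOMAIN_ID.
# The valid range (0 to 101) is taken from the step above.
set_domain_id() {
  local id="$1"
  if [ "$id" -ge 0 ] 2>/dev/null && [ "$id" -le 101 ] 2>/dev/null; then
    export ROS_DOMAIN_ID="$id"
    echo "ROS_DOMAIN_ID set to $id"
  else
    echo "error: '$id' is not a number between 0 and 101" >&2
    return 1
  fi
}

set_domain_id 42
```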
Run the file:
ros2 run isaac_mapping_ros create_map_offline.py --sensor_data_bag=<PATH_TO_BAG> --base_output_folder=<PATH_TO_OUTPUT_FOLDER>
You can monitor the progress of the steps, including image extraction, visual SLAM, depth inference, occupancy map generation, and global localization map creation. Upon successful completion, you should see a message confirming that the maps have been created.
A good occupancy map features well-aligned objects such as walls and overlapping occupancy when the robot revisits the same locations. Misalignment in the map is typically caused by pose errors. To get a better map, try collecting a rosbag with more small loops in an environment rich with visual features.
Ensure that only one map creation job runs at a time within a given ROS_DOMAIN_ID. You can run simultaneous jobs within different ROS_DOMAIN_IDs.
If the depth inference step fails or hangs, it is usually because the previous ESS run did not exit cleanly. You can manually kill the ROS process, or restart the Docker container and run the command again.
Running Perceptor with Localization
Note
When localizing against a created map, the distance from the trajectory taken during mapping and the amount of change in the environment heavily affect the time required to localize and the chance of success. It is typically advised to stay within a meter of the original mapping trajectory. If you are having difficulties localizing against a map and the environment has changed significantly, try remapping the environment.
Note
When enabling SLAM mode, you might see occasional warnings about message drops. This is due to the extra compute required for localization and should not cause any issues. However, if you start experiencing degraded performance, make sure there is enough compute available for your configuration. For example, if other processes running in the background are using CPU, GPU, or memory bandwidth, you might experience reduced localization performance.
After you have created the maps in the tutorial above, you can use the maps for running Perceptor with localization.
First, set the two environment variables for the location of the recordings and the map output folder. Modify them based on your setup; for example, on the Nova Carter:
export RECORDINGS_FOLDER=/mnt/nova_ssd/recordings
export MAPS_FOLDER=/mnt/nova_ssd/maps
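Before launching, you can optionally verify that those folders exist on the host, because a missing path passed to docker run -v is silently created as an empty root-owned directory. A small sketch (the helper name is ours, not part of the tutorial):

```shell
# Optional sanity check: warn about any mount folder that does not
# exist on the host before it is passed to docker run -v.
check_mount_dirs() {
  local status=0 d
  for d in "$@"; do
    if [ ! -d "$d" ]; then
      echo "warning: $d does not exist"
      status=1
    fi
  done
  return $status
}
```

For example, run check_mount_dirs "$RECORDINGS_FOLDER" "$MAPS_FOLDER" before the docker run command below.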
If you have a Nova Carter, install or build nova_carter_bringup and launch the app using the following command:
Pull the Docker image:
docker pull nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64
Run the Docker image:
docker run --privileged --network host \
  -v /dev/*:/dev/* \
  -v /tmp/argus_socket:/tmp/argus_socket \
  -v /etc/nova:/etc/nova \
  -v $RECORDINGS_FOLDER:$RECORDINGS_FOLDER \
  -v $MAPS_FOLDER:$MAPS_FOLDER \
  nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64 \
  ros2 launch nova_carter_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Install the required Debian packages:
sudo apt update
sudo apt-get install -y ros-humble-nova-carter-bringup
source /opt/ros/humble/setup.bash
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_carter_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Use rosdep to install the package’s dependencies:
sudo apt update
rosdep update
rosdep install -i -r --from-paths ${ISAAC_ROS_WS}/src/nova_carter/nova_carter_bringup/ \
  --rosdistro humble -y
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
Build the ROS package in the Docker container:
colcon build --symlink-install --packages-up-to nova_carter_bringup
source install/setup.bash
5. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_carter_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True
The option disable_vgl determines whether visual global localization is turned off. The options vslam_load_map_folder_path and vgl_map_dir refer to folders created in the above tutorial. <PATH_TO_MAP_FOLDER> refers to the option --base_output_folder specified in the map creation tutorial above. If occupancy_map_yaml_file is specified, Nav2’s Map Server is used to publish the occupancy grid as a nav_msgs/OccupancyGrid message.
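The occupancy_map.yaml file uses the standard ROS map_server format, pairing the occupancy image with its resolution and origin. A representative sketch is shown below; the field values are illustrative, and the actual file is generated for you by create_map_offline.py:

```yaml
image: occupancy_map.png     # occupancy image file, relative to this YAML
resolution: 0.05             # meters per pixel
origin: [-10.0, -10.0, 0.0]  # [x, y, yaw] of the lower-left pixel
negate: 0                    # 0 means white pixels are free, black occupied
occupied_thresh: 0.65        # probability above which a cell counts as occupied
free_thresh: 0.25            # probability below which a cell counts as free
```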
If you have a Nova Orin Developer Kit, install or build nova_developer_kit_bringup and launch the app using the following command:
Pull the Docker image:
docker pull nvcr.io/nvidia/isaac/nova_developer_kit_bringup:release_3.2-aarch64
Run the Docker image:
docker run --privileged --network host \
  -v /dev/*:/dev/* \
  -v /tmp/argus_socket:/tmp/argus_socket \
  -v /etc/nova:/etc/nova \
  -v $RECORDINGS_FOLDER:$RECORDINGS_FOLDER \
  -v $MAPS_FOLDER:$MAPS_FOLDER \
  nvcr.io/nvidia/isaac/nova_developer_kit_bringup:release_3.2-aarch64 \
  ros2 launch nova_developer_kit_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Install the required Debian packages:
sudo apt update
sudo apt-get install -y ros-humble-nova-developer-kit-bringup
source /opt/ros/humble/setup.bash
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_developer_kit_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Use rosdep to install the package’s dependencies:
sudo apt update
rosdep update
rosdep install -i -r --from-paths ${ISAAC_ROS_WS}/src/nova_developer_kit/nova_developer_kit_bringup/ \
  --rosdistro humble -y
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
Build the ROS package in the Docker container:
colcon build --symlink-install --packages-up-to nova_developer_kit_bringup
source install/setup.bash
5. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_developer_kit_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True
To run localization with a prerecorded rosbag, use mode:=rosbag rosbag:=<BAG_PATH> to specify the rosbag path. The command to run it on a Nova Carter is as follows:
Pull the Docker image:
docker pull nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64
Run the Docker image:
docker run --privileged --network host \
  -v /dev/*:/dev/* \
  -v /tmp/argus_socket:/tmp/argus_socket \
  -v /etc/nova:/etc/nova \
  -v $RECORDINGS_FOLDER:$RECORDINGS_FOLDER \
  -v $MAPS_FOLDER:$MAPS_FOLDER \
  nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64 \
  ros2 launch nova_carter_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True mode:=rosbag rosbag:=<BAG_PATH>
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Install the required Debian packages:
sudo apt update
sudo apt-get install -y ros-humble-nova-carter-bringup
source /opt/ros/humble/setup.bash
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_carter_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True mode:=rosbag rosbag:=<BAG_PATH>
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Use rosdep to install the package’s dependencies:
sudo apt update
rosdep update
rosdep install -i -r --from-paths ${ISAAC_ROS_WS}/src/nova_carter/nova_carter_bringup/ \
  --rosdistro humble -y
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
Build the ROS package in the Docker container:
colcon build --symlink-install --packages-up-to nova_carter_bringup
source install/setup.bash
5. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_carter_bringup perceptor.launch.py stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map/ occupancy_map_yaml_file:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml vslam_enable_slam:=True mode:=rosbag rosbag:=<BAG_PATH>
If the odom frame already exists in the /tf message in the rosbag, it may conflict with the odom frame generated by cuVSLAM. To resolve this, use replay_additional_args:="--remap /tf:=/tf_old" to remap the existing /tf topic.
Replay runs at real time by default. You can adjust the replay rate with replay_rate:=<replay_rate>. When running a short localization bag, use replay_rate:=0.1 to slow down the replay, allowing more localization triggers and increasing the chances of successful localization.
Visualization Setup
Follow the steps below to set up visualization; you should then see a visualization similar to the one in the quickstart example below:
Follow the Foxglove Setup tutorial to launch Foxglove.
Download nova_carter_perceptor_localization.json layout configuration file.
Import the downloaded layout file into Foxglove.
Quickstart with a Pre-recorded Rosbag
Download the r2b_galileo dataset from the r2b 2024 dataset on NGC. It is a 20-second bag containing synchronized images from four stereo camera pairs. Place the dataset at $ISAAC_ROS_WS/isaac_ros_assets/r2b_2024/r2b_galileo.
To create the map from the downloaded r2b_galileo rosbag, run the following command:
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Install the required Debian packages:
sudo apt update
sudo apt-get install -y ros-humble-isaac-mapping-ros
source /opt/ros/humble/setup.bash
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 run isaac_mapping_ros create_map_offline.py --sensor_data_bag=$ISAAC_ROS_WS/isaac_ros_assets/r2b_2024/r2b_galileo --base_output_folder=$ISAAC_ROS_WS/isaac_ros_assets/maps
To run visual global localization, visual SLAM and the camera-based perception with the map generated in the previous step, run the following command:
Note
We use a replay rate of 0.1 for this example because the rosbag is short and does not contain stationary frames at the beginning. Slowing down the rosbag replay helps global localization initialize correctly.
Pull the Docker image:
docker pull nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64
Run the Docker image:
docker run --privileged --network host \
  -v /dev/*:/dev/* \
  -v /tmp/argus_socket:/tmp/argus_socket \
  -v /etc/nova:/etc/nova \
  -v $ISAAC_ROS_WS:$ISAAC_ROS_WS \
  nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64 \
  ros2 launch nova_carter_bringup perceptor.launch.py rosbag:=$ISAAC_ROS_WS/isaac_ros_assets/r2b_2024/r2b_galileo mode:=rosbag stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<map_dir>/cuvslam_map vgl_map_dir:=<map_dir>/cuvgl_map occupancy_map_yaml_file:=<map_dir>/occupancy_map.yaml replay_rate:=0.1 vslam_enable_slam:=True
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Install the required Debian packages:
sudo apt update
sudo apt-get install -y ros-humble-nova-carter-bringup
source /opt/ros/humble/setup.bash
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_carter_bringup perceptor.launch.py rosbag:=$ISAAC_ROS_WS/isaac_ros_assets/r2b_2024/r2b_galileo mode:=rosbag stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<map_dir>/cuvslam_map vgl_map_dir:=<map_dir>/cuvgl_map occupancy_map_yaml_file:=<map_dir>/occupancy_map.yaml replay_rate:=0.1 vslam_enable_slam:=True
1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
Use rosdep to install the package’s dependencies:
sudo apt update
rosdep update
rosdep install -i -r --from-paths ${ISAAC_ROS_WS}/src/nova_carter/nova_carter_bringup/ \
  --rosdistro humble -y
Install the required assets:
sudo apt-get install -y ros-humble-isaac-ros-ess-models-install ros-humble-isaac-ros-peoplesemseg-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula
Build the ROS package in the Docker container:
colcon build --symlink-install --packages-up-to nova_carter_bringup
source install/setup.bash
5. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:
export ROS_DOMAIN_ID=<unique ID>
Run the file:
ros2 launch nova_carter_bringup perceptor.launch.py rosbag:=$ISAAC_ROS_WS/isaac_ros_assets/r2b_2024/r2b_galileo mode:=rosbag stereo_camera_configuration:=front_left_right_configuration disable_vgl:=False vslam_load_map_folder_path:=<map_dir>/cuvslam_map vgl_map_dir:=<map_dir>/cuvgl_map occupancy_map_yaml_file:=<map_dir>/occupancy_map.yaml replay_rate:=0.1 vslam_enable_slam:=True
The map_dir placeholder specifies the path to the output map directory generated by the map creation step.
Visualize the outputs:
You will see the map and odom frames nearly overlapping at the initial position on the left, and the baselink frame moving to the right. Near the initial position, a green arrow represents the pose hint sent by visual global localization. The purple trajectory following the baselink frame indicates the robot’s motion path, computed by visual SLAM.
In the diagnostic window, the localized_in_exist_map field indicates whether the robot is localized within the map. If this field is false, the robot pose and motion path are in the odom frame, starting from the identity pose at the beginning of the motion. Once the field changes to true, the robot is successfully localized in the map frame, and its motion path aligns with the map.
After you have generated the maps using the above script, refer to the following tutorials to visualize additional perception outputs: