Tutorial: Autonomous Navigation with Isaac Perceptor and Nav2 on the Nova Carter
This tutorial enables you to run autonomous navigation with the Nova Carter robot. It uses the Isaac Perceptor stack for local camera-based 3D perception, AMCL for lidar localization, and Nav2 for navigation.
This tutorial assumes that you have successfully completed the Isaac Perceptor tutorial.
Running the Application
SSH into the robot (instructions).
Make sure you have successfully connected the PS5 joystick to the robot (instructions).
Make sure you export the environment variable MAPS_FOLDER to the path where the maps are stored, for example:

export MAPS_FOLDER=/path/to/maps
Follow the instructions below to launch the app.
Navigation with AMCL localization and wheel odometry
If you don’t have an occupancy map of your environment, follow the Tutorial: Generate an Occupancy Map for Navigation to create one.
Build or install the required packages and launch the app using one of the following options:
Option 1: Use the pre-built Docker image

Pull the Docker image:

docker pull nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64

Run the Docker image:

docker run --privileged --network host \
    -v /dev/*:/dev/* \
    -v /tmp/argus_socket:/tmp/argus_socket \
    -v /etc/nova:/etc/nova \
    -v $MAPS_FOLDER:$MAPS_FOLDER \
    nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64 \
    ros2 launch nova_carter_bringup navigation.launch.py map_yaml_path:=<path_to_map_yaml>

Option 2: Install from Debian packages

1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
2. Install the required Debian packages:

sudo apt update
sudo apt-get install -y ros-humble-nova-carter-bringup
source /opt/ros/humble/setup.bash

3. Install the required assets:

sudo apt-get install -y ros-humble-isaac-ros-peoplesemseg-models-install ros-humble-isaac-ros-ess-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula

4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:

export ROS_DOMAIN_ID=<unique ID>

5. Run the launch file:

ros2 launch nova_carter_bringup navigation.launch.py map_yaml_path:=<path_to_map_yaml>

Option 3: Build from source

1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
2. Use rosdep to install the package’s dependencies:

sudo apt update
rosdep update
rosdep install -i -r --from-paths ${ISAAC_ROS_WS}/src/nova_carter/nova_carter_bringup/ \
    --rosdistro humble -y

3. Install the required assets:

sudo apt-get install -y ros-humble-isaac-ros-peoplesemseg-models-install ros-humble-isaac-ros-ess-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula

4. Build the ROS package in the Docker container:

colcon build --symlink-install --packages-up-to nova_carter_bringup
source install/setup.bash

5. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:

export ROS_DOMAIN_ID=<unique ID>

6. Run the launch file:

ros2 launch nova_carter_bringup navigation.launch.py map_yaml_path:=<path_to_map_yaml>
Replace <path_to_map_yaml> with the path of the map YAML file.
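For example, if your occupancy map was saved as map.yaml in the maps folder exported earlier (the file name here is a placeholder for illustration), the launch command would be:

ros2 launch nova_carter_bringup navigation.launch.py map_yaml_path:=$MAPS_FOLDER/map.yaml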
Navigation with visual localization and odometry from Isaac Perceptor
You can use visual localization and odometry from Isaac Perceptor instead of AMCL localization and wheel odometry.
Follow the Tutorial: Mapping and Localization with Isaac Perceptor to create the necessary maps described in Map Types. Once the maps are created, use them with one of the following options:
Option 1: Use the pre-built Docker image

Pull the Docker image:

docker pull nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64

Run the Docker image:

docker run --privileged --network host \
    -v /dev/*:/dev/* \
    -v /tmp/argus_socket:/tmp/argus_socket \
    -v /etc/nova:/etc/nova \
    -v $MAPS_FOLDER:$MAPS_FOLDER \
    nvcr.io/nvidia/isaac/nova_carter_bringup:release_3.2-aarch64 \
    ros2 launch nova_carter_bringup navigation.launch.py \
        stereo_camera_configuration:=front_left_right_configuration \
        map_yaml_path:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml \
        enable_visual_localization:=True \
        vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ \
        vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map \
        vslam_enable_slam:=True

Option 2: Install from Debian packages

1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
2. Install the required Debian packages:

sudo apt update
sudo apt-get install -y ros-humble-nova-carter-bringup
source /opt/ros/humble/setup.bash

3. Install the required assets:

sudo apt-get install -y ros-humble-isaac-ros-peoplesemseg-models-install ros-humble-isaac-ros-ess-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula

4. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:

export ROS_DOMAIN_ID=<unique ID>

5. Run the launch file:

ros2 launch nova_carter_bringup navigation.launch.py \
    stereo_camera_configuration:=front_left_right_configuration \
    map_yaml_path:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml \
    enable_visual_localization:=True \
    vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ \
    vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map \
    vslam_enable_slam:=True

Option 3: Build from source

1. Make sure you followed the Prerequisites and you are inside the Isaac ROS Docker container.
2. Use rosdep to install the package’s dependencies:

sudo apt update
rosdep update
rosdep install -i -r --from-paths ${ISAAC_ROS_WS}/src/nova_carter/nova_carter_bringup/ \
    --rosdistro humble -y

3. Install the required assets:

sudo apt-get install -y ros-humble-isaac-ros-peoplesemseg-models-install ros-humble-isaac-ros-ess-models-install
source /opt/ros/humble/setup.bash
ros2 run isaac_ros_ess_models_install install_ess_models.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_vanilla.sh --eula
ros2 run isaac_ros_peoplesemseg_models_install install_peoplesemsegnet_shuffleseg.sh --eula

4. Build the ROS package in the Docker container:

colcon build --symlink-install --packages-up-to nova_carter_bringup
source install/setup.bash

5. Declare ROS_DOMAIN_ID with the same unique ID (a number between 0 and 101) on every bash instance inside the Docker container:

export ROS_DOMAIN_ID=<unique ID>

6. Run the launch file:

ros2 launch nova_carter_bringup navigation.launch.py \
    stereo_camera_configuration:=front_left_right_configuration \
    map_yaml_path:=<PATH_TO_MAP_FOLDER>/occupancy_map.yaml \
    enable_visual_localization:=True \
    vslam_load_map_folder_path:=<PATH_TO_MAP_FOLDER>/cuvslam_map/ \
    vgl_map_dir:=<PATH_TO_MAP_FOLDER>/cuvgl_map \
    vslam_enable_slam:=True
Replace <PATH_TO_MAP_FOLDER> with the path to the output map directory generated by the map creation step. In this setup, cuVSLAM first localizes within the existing map using the pose hint provided by cuVGL. Once localization is successful, the pose output from cuVSLAM is used for navigation.
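For illustration, the launch arguments above assume the map folder produced by the mapping tutorial contains the occupancy map and the two visual maps. Assuming a hypothetical folder my_map under $MAPS_FOLDER, the expected layout is:

$MAPS_FOLDER/my_map/
├── occupancy_map.yaml
├── cuvslam_map/
└── cuvgl_map/

In that case, pass $MAPS_FOLDER/my_map wherever <PATH_TO_MAP_FOLDER> appears in the commands above.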
Note
Some users have reported sporadic Wi-Fi connection issues and the robot sometimes not responding to goal poses when using the pre-built Docker images. We are actively working on resolving these issues.
Configurations
You can configure the following modules in the navigation app with these options:
Localization: by default, AMCL localization uses 3D lidar.
To use visual localization, set enable_visual_localization:=True and provide both the visual global localization map and the cuVSLAM map. Enabling visual localization automatically enables visual odometry and disables wheel odometry.
To use AMCL localization with 2D lidar, set enable_3d_lidar_localization:=False.
Odometry: wheel odometry is enabled by default.
To use visual odometry, set enable_wheel_odometry:=False.
Costmap layers: the default configuration includes a 2D lidar costmap and an Nvblox costmap.
To add a 3D lidar costmap layer, set enable_3d_lidar_costmap:=True.
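As a concrete sketch of combining these options (the map path is a placeholder), the following command launches navigation with AMCL on 2D lidar, visual odometry instead of wheel odometry, and an additional 3D lidar costmap layer:

ros2 launch nova_carter_bringup navigation.launch.py \
    map_yaml_path:=$MAPS_FOLDER/map.yaml \
    enable_3d_lidar_localization:=False \
    enable_wheel_odometry:=False \
    enable_3d_lidar_costmap:=True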
For the stereo camera configurations, refer to the Tutorial: Stereo Camera Configurations for Isaac Perceptor.
Note
Navigation is only possible when stereo_camera_configuration is set to front_configuration, front_left_right_configuration, or no_cameras (for a lidar-only setup).
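For example, a lidar-only launch with AMCL localization (a sketch; the map path is a placeholder) would look like this:

ros2 launch nova_carter_bringup navigation.launch.py \
    stereo_camera_configuration:=no_cameras \
    map_yaml_path:=$MAPS_FOLDER/map.yaml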
Visualizing the Outputs and Sending Goals
Complete the Foxglove setup guide. Make sure to follow the instructions for installing the additional Nvblox Foxglove extension.
Download all Foxglove layout configurations available in the nova_carter repository.
Open Foxglove Studio on your remote machine, then open the nova_carter_navigation.json layout file downloaded in the previous step.
Validate that you can see a visualization of the map, the local costmap, and the footprint of the robot, similar to the following:
Note
By default, the costmap layers use the pessimistic Nvblox costmap locally and the optimistic one globally. The pessimistic costmap marks all “unknown” cells as obstructed, so the robot is locally more risk-averse.
You can send a goal pose setpoint to Nav2 using the pose publish button in Foxglove as shown below:
Note
Make sure that the Foxglove “Display frame” in the 3D panel is set to “map” before sending goals. If goals are sent and the robot does not move, first check that the correct “Display frame” has been set.
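As an alternative to the Foxglove button, you can send a goal from a shell inside the container through Nav2’s standard NavigateToPose action interface. This is a generic Nav2 sketch rather than an app-specific command; the goal coordinates are placeholders, expressed in the map frame:

ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
    "{pose: {header: {frame_id: map}, pose: {position: {x: 1.0, y: 0.5}, orientation: {w: 1.0}}}}"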
Note
You can also use the joystick to override Nav2 autonomous control at any point.
If you use visual localization for navigation, refer to the visualization section in this Quickstart tutorial to view the localization result.
After the robot is successfully localized, you’ll see a visualization similar to the following:
The video begins by displaying the local costmap, then transitions to the global costmap.
Other Details
To learn about our parameter changes for improving AMCL localization performance, refer to the following document: