Isaac ROS Dev

Isaac ROS Dev is a container-based workflow for production robotics: you build your ROS workspace in a consistent, reproducible environment and then deploy the resulting build products as a single, hermetic artifact to run on a robot.

Development Environment

The Isaac ROS development environment runs within a Docker container on the target platform using the script run_dev.sh. After you’ve set up your ROS workspace on your machine with your packages, third-party packages, and one or more Isaac ROS packages, you can use this script to launch a Docker container that contains all the system libraries and dependencies needed to build.

run_dev.sh sets up a development environment containing ROS 2 and key versions of NVIDIA frameworks prepared for both x86_64 and Jetson. This script prepares a Docker image with a supported configuration for the host machine and delivers you into a bash prompt running inside the container. From here, you are ready to execute ROS 2 build/run commands with your host workspace files, which are mounted into the container and available to edit both on the host and in the container. Note: If you run this script again while a container is already running, it will attach a new shell to that container.

$PATH_TO_ISAAC_ROS_COMMON/scripts/run_dev.sh -d $PATH_TO_ROS_WORKSPACE

For a complete description of arguments for run_dev.sh, see here.

Image keys

run_dev.sh prepares a base Docker image and mounts your target workspace into the running container. The base Docker image itself is assembled by the build_image_layers.sh script, which parses a period-delimited string called an image key. The image key specifies an ordered set of matching Dockerfiles to be built as layers in sequence, each layer feeding into the next as its base image.

build_image_layers.sh matches corresponding Dockerfiles to the image key. For example, an image key of first.second.third could match {Dockerfile.first, Dockerfile.second, Dockerfile.third}, where run_dev.sh will build the image for Dockerfile.first, then use that image as the base image while building Dockerfile.second, and so on. The file matching looks for the largest subsequence, so first.second.third could also match {Dockerfile.first.second, Dockerfile.third} depending on which files exist in the search paths.
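Conceptually, the layering works like the following hand-written sketch (not the script's actual implementation; the tag names are made up), where each matched Dockerfile receives the previous layer through the BASE_IMAGE build argument shown in the Dockerfile example further below:

docker build -f Dockerfile.first -t layer:first .
docker build -f Dockerfile.second --build-arg BASE_IMAGE=layer:first -t layer:first.second .
docker build -f Dockerfile.third --build-arg BASE_IMAGE=layer:first.second -t layer:first.second.third .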

Configuring run_dev.sh

Using this pattern, you can add your own layers on top of the base images provided by Isaac ROS to tailor the environment to your needs, for example by installing additional packages.

You can configure the development environment by writing a file named .isaac_ros_common-config in your home directory. The following keys in .isaac_ros_common-config let you specify your own image key and paths to your own Dockerfiles for run_dev.sh to use.

An example .isaac_ros_common-config file:

CONFIG_IMAGE_KEY=ros2_humble.mine
CONFIG_DOCKER_SEARCH_DIRS=(../ros_ws/src/isaac_ros_common/docker ../lib/src/gxf/docker)
CONFIG_CONTAINER_NAME_SUFFIX=mysuffix
BASE_DOCKER_REGISTRY_NAMES=("nvcr.io/isaac/ros" "some.other/image/name")

For example, if you had the following directory structure:

~/.isaac_ros_common-config
workspaces/isaac_ros-dev
     workspaces/isaac_ros-dev/ros_ws
         workspaces/isaac_ros-dev/ros_ws/mystuff
         workspaces/isaac_ros-dev/ros_ws/mystuff/Dockerfile.mine
         workspaces/isaac_ros-dev/ros_ws/mystuff/myfile.txt

where Dockerfile.mine is your own custom image layer, using its host directory for the Docker build context:

ARG BASE_IMAGE
FROM ${BASE_IMAGE}

... steps ...

COPY myfile.txt /myfile.txt

... more steps ...

You could then extend the base image launched as a container by run_dev.sh as follows.

~/.isaac_ros_common-config

CONFIG_IMAGE_KEY="ros2_humble.mine"
CONFIG_DOCKER_SEARCH_DIRS=(workspaces/isaac_ros-dev/ros_ws/mystuff)

This configures the image key to include mine and tells run_dev.sh where to look first for Dockerfiles (workspaces/isaac_ros-dev/ros_ws/mystuff) before checking the default location.
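With this configuration on an x86_64 host, the resolved layer stack would look something like the following sketch (it assumes run_dev.sh prepends the host platform to the image key, consistent with the standard keys x86_64.ros2_humble and aarch64.ros2_humble described below, and that the default Dockerfiles ship in isaac_ros_common/docker):

Dockerfile.x86_64        default search location (isaac_ros_common/docker)
Dockerfile.ros2_humble   default search location (isaac_ros_common/docker)
Dockerfile.mine          workspaces/isaac_ros-dev/ros_ws/mystuff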

Prebuilt images

As part of the build image preparation when invoking run_dev.sh, Dockerfiles are matched to the target image key. After the Dockerfiles have been determined, build_image_layers.sh checks the remote Docker registry for an image that matches the hash of the matched Dockerfiles. If one exists, it pulls that image down and uses it rather than building the image locally. If you change the contents of any matched Dockerfile, the hash changes and build_image_layers.sh builds the Docker images locally again.

NVIDIA provides prebuilt Docker images for the standard image keys aarch64.ros2_humble and x86_64.ros2_humble on NVCR (NVIDIA Container Registry) for your convenience. If your local copies of Dockerfile.x86_64, Dockerfile.aarch64, and Dockerfile.ros2_humble match the hashed contents of these prebuilt images, run_dev.sh will find and download them to build on rather than starting from scratch.

For a complete description of arguments for build_image_layers.sh, see here.

The following keys configured in .isaac_ros_common-config let you determine which remote registries will be checked. An example .isaac_ros_common-config file:

CONFIG_IMAGE_KEY=ros2_humble.mine
CONFIG_DOCKER_SEARCH_DIRS=(../ros_ws/src/isaac_ros_common/docker ../lib/src/gxf/docker)
CONFIG_CONTAINER_NAME_SUFFIX=mysuffix
BASE_DOCKER_REGISTRY_NAMES=("nvcr.io/isaac/ros" "some.other/image/name")

Offline dev environment

If you have already built your base image and you want run_dev.sh to skip rebuilding it (and to avoid any operations that could require internet connectivity), you can use the -b/--skip_image_build argument to run offline.
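For example, the earlier invocation with the skip flag added (assuming you combine it with -d as before):

$PATH_TO_ISAAC_ROS_COMMON/scripts/run_dev.sh -b -d $PATH_TO_ROS_WORKSPACE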

Deployment

After you’ve built your ROS workspace, you will have one of two build products: A) Debians built with bloom-generate or B) a ROS workspace install/ directory. The docker_deploy.sh script lets you package these build products into a runnable Docker image to deploy to your robot.
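As a rough sketch of how these two build products might be produced (my_package is a hypothetical package name; the bloom commands assume an Ubuntu Jammy / ROS 2 Humble setup with rosdep initialized):

# A) Generate a Debian for a single package with bloom-generate
cd /workspaces/isaac_ros-dev/ros_ws/src/my_package
bloom-generate rosdebian --os-name ubuntu --os-version jammy --ros-distro humble
fakeroot debian/rules binary    # writes the .deb one directory up

# B) Or build the whole workspace so that an install/ directory is produced
cd /workspaces/isaac_ros-dev/ros_ws
colcon build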

Using docker_deploy.sh

Once you have your built ROS workspace and/or a set of Debians to install, the docker_deploy.sh script packages the ROS workspace directory, other directories, tarballs, and/or Debians onto a base image. You can also configure a launch file to run by default each time the container is started. The base image can be specified as an image key for your own custom production environment, which will be built with build_image_layers.sh. Additional options let you customize how the final image is assembled from your build products.

For a complete description of arguments for docker_deploy.sh, see here.

$PATH_TO_ISAAC_ROS_COMMON/scripts/docker_deploy.sh \
   --base_image_key "aarch64.ros2_humble" \
   --custom_apt_source "deb https://mycool-apt-get-server.com/ros-debian-local focal main" \
   --install_debians "mydebian01,libnvvpi3,tensorrt" \
   --include_dir /workspaces/isaac_ros-dev/tests \
   --include_dir /home/admin/scripts:/home/admin/scripts \
   --ros_ws /workspaces/isaac_ros-dev/ros_ws  \
   --launch_package "isaac_ros_image_proc" \
   --launch_file "isaac_ros_image_flip.launch.py" \
   --name "my_docker_registry.io/myimage"

In this example, a Docker image named my_docker_registry.io/myimage is built by configuring one custom apt source, installing three Debian packages, and packaging two host directories and a ROS workspace at /workspaces/isaac_ros-dev/ros_ws on top of a base image built using the key aarch64.ros2_humble. Finally, a launch file and its package are configured to run by default when the container is launched. You can then push this image to your remote Docker registry my_docker_registry.io and run the image as follows.

docker run --rm -it --gpus all my_docker_registry.io/myimage

This will launch the container and run isaac_ros_image_flip.launch.py by default.
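
To make the image available to the robot, push it to your registry first with a standard Docker push (this assumes you are already logged in to my_docker_registry.io):

docker push my_docker_registry.io/myimage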