isaac_ros_yolov8

Source code on GitHub.

Quickstart

Set Up Development Environment

  1. Set up your development environment by following the instructions in the getting started guide.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.2 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Note

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure the required tools are installed:

    sudo apt-get install -y curl jq tar
    

    Then, run these commands to download the asset from NGC:

    NGC_ORG="nvidia"
    NGC_TEAM="isaac"
    PACKAGE_NAME="isaac_ros_yolov8"
    NGC_RESOURCE="isaac_ros_yolov8_assets"
    NGC_FILENAME="quickstart.tar.gz"
    MAJOR_VERSION=3
    MINOR_VERSION=2
    VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
    AVAILABLE_VERSIONS=$(curl -s \
        -H "Accept: application/json" "$VERSION_REQ_URL")
    LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
        .recipeVersions[]
        | .versionId as \$v
        | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
        | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
        | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
        | \$v
        " | sort -V | tail -n 1
    )
    if [ -z "$LATEST_VERSION_ID" ]; then
        echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
        echo "Found versions:"
        echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
    else
        mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
        FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
    versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
        curl -LO --request GET "${FILE_REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
        rm ${NGC_FILENAME}
    fi
    
  2. Download the model of your choice from Ultralytics YOLOv8. For this example, we use YOLOv8s.

    cd ~/Downloads && \
       wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
    
  3. Convert the PyTorch model (.pt) to a general ONNX model (.onnx). Export to ONNX by following the instructions below or the Ultralytics export documentation. Arguments such as FP16 quantization can be passed to the exporter during this step (see the sketch after this step). The ONNX model is then converted to a TensorRT engine file and used with the Isaac ROS TensorRT node for inference. You can use netron to visualize the ONNX model and note the input and output names and dimensions.

    This can be done by first installing ultralytics and onnx via pip:

    pip3 install ultralytics
    pip3 install onnx
    

    Afterwards, convert the model from a .pt file to a .onnx model using ultralytics. Start an interactive Python shell:

    python3
    

    Then, within the shell, export the model:

    >>> from ultralytics import YOLO
    >>> model = YOLO('yolov8s.pt')
    >>> model.export(format='onnx')
    

    Exit the interactive Python shell and copy the generated .onnx model into the designated location for Isaac ROS (${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8):

    mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8
    cp yolov8s.onnx ${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8
    
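
    If you want FP16 quantization during export (as mentioned in this step), the relevant arguments can be passed to the exporter directly. The following is a minimal sketch, assuming a CUDA-capable device is available, since ultralytics requires a GPU for FP16 ONNX export; half and imgsz are standard export arguments:

    from ultralytics import YOLO

    model = YOLO('yolov8s.pt')
    # half=True requests FP16 quantization; device=0 selects the first CUDA
    # device, which FP16 ONNX export requires. imgsz fixes the network input
    # resolution (640 is the YOLOv8s default).
    model.export(format='onnx', half=True, device=0, imgsz=640)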

Build isaac_ros_yolov8

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-yolov8 ros-humble-isaac-ros-dnn-image-encoder ros-humble-isaac-ros-tensor-rt
    

Run Launch File

  1. Enter the Docker container on Jetson:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  2. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get update
    
    sudo apt-get install -y ros-humble-isaac-ros-examples
    
  3. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    cd /workspaces/isaac_ros-dev && \
    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=yolov8 interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_yolov8/quickstart_interface_specs.json \
       model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8/yolov8s.onnx engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8/yolov8s.plan
    
  4. Open a second terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  5. Run the rosbag file to simulate an image stream:

    ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_yolov8/quickstart.bag
    

Visualize Results

  1. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  2. Run the YOLOv8 visualization script:

    ros2 run isaac_ros_yolov8 isaac_ros_yolov8_visualizer.py
    
  3. Open another terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
       ./scripts/run_dev.sh
    
  4. Visualize and validate the output of the package with rqt_image_view:

    ros2 run rqt_image_view rqt_image_view /yolov8_processed_image
    

    Your output should look like this:

    [Image: rqt_image_view showing detections of people cycling and bikes]

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see the Isaac ROS Troubleshooting page.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, see the Deep Learning Troubleshooting page.

API

Usage

ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize.launch.py \
   model_file_path:=<model_file_path> \
   engine_file_path:=<engine_file_path> \
   input_binding_names:=<input_binding_names> \
   output_binding_names:=<output_binding_names> \
   network_image_width:=<network_image_width> \
   network_image_height:=<network_image_height> \
   force_engine_update:=<force_engine_update> \
   image_mean:=<image_mean> \
   image_stddev:=<image_stddev> \
   confidence_threshold:=<confidence_threshold> \
   nms_threshold:=<nms_threshold>
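
For reference, a concrete invocation using the quickstart model might look like the following. The binding names images and output0 are the defaults produced by the Ultralytics ONNX export above; if you exported with different settings, verify the names with netron first:

ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize.launch.py \
   model_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8/yolov8s.onnx \
   engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/yolov8/yolov8s.plan \
   input_binding_names:=['images'] output_binding_names:=['output0'] \
   network_image_width:=640 network_image_height:=640 \
   confidence_threshold:=0.25 nms_threshold:=0.45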

Yolov8DecoderNode

ROS Parameters

tensor_name (string, default: "output_tensor")
    Name of the inferred output tensor published by the Managed NITROS
    Publisher. The decoder uses this name to look up the output tensor in
    the incoming tensor list.

confidence_threshold (float, default: 0.25)
    Detection confidence threshold. Used to filter candidate detections
    during Non-Maximum Suppression (NMS).

nms_threshold (float, default: 0.45)
    NMS IOU threshold.
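
To clarify how these two parameters interact, here is an illustrative Python sketch (not the decoder's actual implementation): candidates below confidence_threshold are dropped first, then greedy NMS suppresses any remaining box whose IOU with an already-kept box exceeds nms_threshold.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def decode(boxes, scores, confidence_threshold=0.25, nms_threshold=0.45):
    """boxes: list of (x1, y1, x2, y2); scores: matching confidences."""
    # 1. Filter candidates by confidence.
    candidates = [(b, s) for b, s in zip(boxes, scores) if s >= confidence_threshold]
    # 2. Greedy NMS: keep the highest-scoring box, suppress overlapping ones.
    candidates.sort(key=lambda bs: bs[1], reverse=True)
    kept = []
    for box, score in candidates:
        if all(iou(box, k) <= nms_threshold for k, _ in kept):
            kept.append((box, score))
    return kept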

ROS Topics Subscribed

tensor_sub (isaac_ros_tensor_list_interfaces/TensorList)
    Tensor list from the managed NITROS subscriber that represents the
    inferred aligned bounding boxes.

ROS Topics Published

detections_output (vision_msgs/Detection2DArray)
    Aligned image bounding boxes with detection class.
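
Downstream code can consume this topic directly. The following is a minimal rclpy subscriber sketch, assuming the default detections_output topic name and the ROS 2 Humble vision_msgs message layout; the node name detection_listener is illustrative:

import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray

class DetectionListener(Node):
    def __init__(self):
        super().__init__('detection_listener')
        self.sub = self.create_subscription(
            Detection2DArray, 'detections_output', self.callback, 10)

    def callback(self, msg):
        for det in msg.detections:
            # Each detection carries class hypotheses with scores and an
            # axis-aligned bounding box (center point plus width/height).
            hyp = det.results[0].hypothesis
            bbox = det.bbox
            self.get_logger().info(
                f'class={hyp.class_id} score={hyp.score:.2f} '
                f'center=({bbox.center.position.x:.0f}, {bbox.center.position.y:.0f}) '
                f'size={bbox.size_x:.0f}x{bbox.size_y:.0f}')

def main():
    rclpy.init()
    rclpy.spin(DetectionListener())

if __name__ == '__main__':
    main()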