isaac_ros_yolov8
Source code on GitHub.
Quickstart
Set up your development environment by following the instructions here.
Clone isaac_ros_common and this repository under ${ISAAC_ROS_WS}/src:

```bash
cd ${ISAAC_ROS_WS}/src
git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_object_detection.git isaac_ros_object_detection
```
Launch the Docker container using the run_dev.sh script:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
```
Install this package's dependencies:

```bash
sudo apt-get install -y ros-humble-isaac-ros-yolov8 ros-humble-isaac-ros-tensor-rt ros-humble-isaac-ros-dnn-image-encoder
```
Download the model of your choice from Ultralytics YOLOv8. For this example, we use YOLOv8s.

Warning: This step must be performed on x86_64.

```bash
cd /tmp && \
  wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
```
Export to ONNX following the instructions here.

Warning: This step must be performed on x86_64.

Arguments can be specified for FP16 quantization during this step (see the export sketch after the export step below). The ONNX model is converted to a TensorRT engine file and used with the Isaac ROS TensorRT node for inference. You can use netron to visualize the ONNX model and note the input and output names and dimensions.
This can be done by first installing ultralytics and onnx via pip:

```bash
pip3 install ultralytics
pip3 install onnx
```
Afterwards, convert the model from a .pt file to a .onnx model using ultralytics. Start a Python interpreter:

```bash
cd /tmp && \
  python3
```
Then within python3, export the model:

```python
>>> from ultralytics import YOLO
>>> model = YOLO('yolov8s.pt')
>>> model.export(format='onnx')
```
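As mentioned above, FP16 quantization can be requested at export time. A minimal sketch, assuming the `half` flag of the Ultralytics export API (verify it against your installed ultralytics version):

```python
from ultralytics import YOLO

# Export an FP16 ONNX model; half=True is assumed to be the
# Ultralytics flag for FP16 export in your installed version.
model = YOLO('yolov8s.pt')
model.export(format='onnx', half=True)
```

To inspect the exported graph as suggested above, netron can be installed via pip and pointed at the model (netron also ships as a desktop app and web viewer):

```bash
pip3 install netron
# Serves an interactive graph view; note the 'images' input and
# 'output0' output bindings referenced by the launch file later.
netron /tmp/yolov8s.onnx
```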
If you are planning on using a Jetson, copy the generated .onnx model onto the Jetson, and then copy it into the aarch64 Docker container. We will assume that you have already transferred the model onto the Jetson, into the ~/Downloads directory.
Enter the Docker container on the Jetson:

```bash
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
  ./scripts/run_dev.sh
```
Make a directory called /tmp on the Jetson:

```bash
mkdir -p /tmp
```
Outside the container, copy the generated onnx model into it:

```bash
cd ~/Downloads && \
  docker cp yolov8s.onnx isaac_ros_dev-aarch64-container:/tmp
```
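If you want to confirm the copy succeeded, you can list the file from inside the running container (container name as used in the docker cp command above):

```bash
# Verify the model is visible inside the container.
docker exec isaac_ros_dev-aarch64-container ls -lh /tmp/yolov8s.onnx
```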
Run the following launch file to perform YOLOv8 detection using TensorRT:

```bash
cd /workspaces/isaac_ros-dev && \
  ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize.launch.py model_file_path:=/tmp/yolov8s.onnx engine_file_path:=/tmp/yolov8s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640 force_engine_update:=False image_mean:=[0.0,0.0,0.0] image_stddev:=[1.0,1.0,1.0] input_image_width:=640 input_image_height:=640 confidence_threshold:=0.25 nms_threshold:=0.45
```

Note: To run the detections without visualization, use the yolov8_tensor_rt.launch.py launch file in this package.

The output bounding boxes will be published on the /detections_output topic.
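To inspect the raw detections in another terminal inside the container, you can echo the topic (topic name as noted above):

```bash
# Each message is a vision_msgs/Detection2DArray carrying bounding
# boxes and class/score hypotheses.
ros2 topic echo /detections_output
```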
Run the image publisher node to publish input images for inference. A sample image (people_cycles.jpg) is provided here:

```bash
cd /workspaces/isaac_ros-dev/src/isaac_ros_object_detection && \
  ros2 run image_publisher image_publisher_node resources/people_cycles.jpg --ros-args --remap /image_raw:=/image
```
Visualize and validate the output of the package in the rqt_image_view window. Inside the RQT image window that should have popped up, you should see the sample image annotated with labeled bounding boxes around the detected objects.
API
Usage
```bash
ros2 launch isaac_ros_yolov8 yolov8_tensor_rt.launch.py model_file_path:=<model_file_path> engine_file_path:=<engine_file_path> input_binding_names:=<input_binding_names> output_binding_names:=<output_binding_names> network_image_width:=<network_image_width> network_image_height:=<network_image_height> force_engine_update:=<force_engine_update> image_mean:=<image_mean> image_stddev:=<image_stddev>
```
ROS Parameters
| ROS Parameter | Type | Default | Description |
|---|---|---|---|
| `tensor_name` | `string` | `output_tensor` | Name of the inferred output tensor published by the Managed NITROS Publisher. The decoder uses this name to get the output tensor. |
| `confidence_threshold` | `float` | `0.25` | Detection confidence threshold. Used to filter candidate detections during Non-Maximum Suppression (NMS). |
| `nms_threshold` | `float` | `0.45` | NMS IOU threshold. |
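As a usage note, these thresholds can be tuned at launch time; a minimal sketch, assuming yolov8_tensor_rt.launch.py forwards confidence_threshold and nms_threshold the same way the quickstart visualize launch does:

```bash
# Raise the confidence filter and tighten NMS for this run.
ros2 launch isaac_ros_yolov8 yolov8_tensor_rt.launch.py \
  model_file_path:=/tmp/yolov8s.onnx engine_file_path:=/tmp/yolov8s.plan \
  input_binding_names:=['images'] output_binding_names:=['output0'] \
  confidence_threshold:=0.5 nms_threshold:=0.4
```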
ROS Topics Subscribed
| ROS Topic | Interface | Description |
|---|---|---|
| `tensor_sub` | isaac_ros_tensor_list_interfaces/msg/TensorList | Tensor list from the managed NITROS subscriber that represents the inferred aligned bounding boxes. |
ROS Topics Published
| ROS Topic | Interface | Description |
|---|---|---|
| `detections_output` | vision_msgs/msg/Detection2DArray | Aligned image bounding boxes with detection class. |
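For downstream consumers of the published detections, a minimal rclpy subscriber sketch (topic name taken from the table above; field access follows the Humble-era vision_msgs/Detection2DArray layout, which you should verify for your distribution):

```python
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class DetectionListener(Node):
    """Logs class IDs, scores, and box centers from YOLOv8 detections."""

    def __init__(self):
        super().__init__('detection_listener')
        # Topic name from the ROS Topics Published table above.
        self.create_subscription(
            Detection2DArray, 'detections_output', self.on_detections, 10)

    def on_detections(self, msg: Detection2DArray):
        for det in msg.detections:
            for hyp in det.results:
                # ObjectHypothesisWithPose: class id and confidence score.
                self.get_logger().info(
                    f'class={hyp.hypothesis.class_id} '
                    f'score={hyp.hypothesis.score:.2f} '
                    f'center=({det.bbox.center.position.x:.1f}, '
                    f'{det.bbox.center.position.y:.1f})')


def main():
    rclpy.init()
    node = DetectionListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```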