isaac_ros_detectnet

Source code on GitHub.

Quickstart

Set Up Development Environment

  1. Set up your development environment by following the instructions in getting started.

  2. Clone isaac_ros_common under ${ISAAC_ROS_WS}/src.

    cd ${ISAAC_ROS_WS}/src && \
       git clone -b release-3.1 https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git isaac_ros_common
    
  3. (Optional) Install dependencies for any sensors you want to use by following the sensor-specific guides.

    Warning

    We strongly recommend installing all sensor dependencies before starting any quickstarts. Some sensor dependencies require restarting the Isaac ROS Dev container during installation, which will interrupt the quickstart process.

Download Quickstart Assets

  1. Download quickstart data from NGC:

    Make sure required libraries are installed.

    sudo apt-get install -y curl jq tar
    

    Then, run these commands to download the asset from NGC:

    NGC_ORG="nvidia"
    NGC_TEAM="isaac"
    PACKAGE_NAME="isaac_ros_detectnet"
    NGC_RESOURCE="isaac_ros_detectnet_assets"
    NGC_FILENAME="quickstart.tar.gz"
    MAJOR_VERSION=3
    MINOR_VERSION=1
    VERSION_REQ_URL="https://catalog.ngc.nvidia.com/api/resources/versions?orgName=$NGC_ORG&teamName=$NGC_TEAM&name=$NGC_RESOURCE&isPublic=true&pageNumber=0&pageSize=100&sortOrder=CREATED_DATE_DESC"
    AVAILABLE_VERSIONS=$(curl -s \
        -H "Accept: application/json" "$VERSION_REQ_URL")
    LATEST_VERSION_ID=$(echo $AVAILABLE_VERSIONS | jq -r "
        .recipeVersions[]
        | .versionId as \$v
        | \$v | select(test(\"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\"))
        | split(\".\") | {major: .[0]|tonumber, minor: .[1]|tonumber, patch: .[2]|tonumber}
        | select(.major == $MAJOR_VERSION and .minor <= $MINOR_VERSION)
        | \$v
        " | sort -V | tail -n 1
    )
    if [ -z "$LATEST_VERSION_ID" ]; then
        echo "No corresponding version found for Isaac ROS $MAJOR_VERSION.$MINOR_VERSION"
        echo "Found versions:"
        echo $AVAILABLE_VERSIONS | jq -r '.recipeVersions[].versionId'
    else
        mkdir -p ${ISAAC_ROS_WS}/isaac_ros_assets && \
        FILE_REQ_URL="https://api.ngc.nvidia.com/v2/resources/$NGC_ORG/$NGC_TEAM/$NGC_RESOURCE/\
    versions/$LATEST_VERSION_ID/files/$NGC_FILENAME" && \
        curl -LO --request GET "${FILE_REQ_URL}" && \
        tar -xf ${NGC_FILENAME} -C ${ISAAC_ROS_WS}/isaac_ros_assets && \
        rm ${NGC_FILENAME}
    fi
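
The jq filter above implements a simple version-selection rule: keep only `x.y.z` version IDs, require the requested major version, allow a minor version no greater than the requested one, and take the newest match. A minimal Python sketch of the same selection logic (the sample version IDs are hypothetical, not real NGC data):

```python
import re

def pick_latest_version(version_ids, major_req, minor_req):
    """Pick the highest x.y.z version with a matching major version and a
    minor version no greater than the requested one (mirrors the jq filter)."""
    semver = re.compile(r"^\d+\.\d+\.\d+$")
    candidates = []
    for vid in version_ids:
        if not semver.match(vid):
            continue  # skip non-semver IDs, as the jq test() does
        major, minor, patch = (int(p) for p in vid.split("."))
        if major == major_req and minor <= minor_req:
            candidates.append((major, minor, patch))
    if not candidates:
        return None  # mirrors the "No corresponding version found" branch
    return "%d.%d.%d" % max(candidates)

# Hypothetical list of version IDs, as NGC might return them.
print(pick_latest_version(["3.0.0", "3.1.5", "3.2.0", "2.9.1", "dev"], 3, 1))  # 3.1.5
```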
    

Build isaac_ros_detectnet

  1. Launch the Docker container using the run_dev.sh script:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  2. Install the prebuilt Debian package:

    sudo apt-get install -y ros-humble-isaac-ros-detectnet ros-humble-isaac-ros-dnn-image-encoder ros-humble-isaac-ros-triton
    

Run Launch File

  1. Continuing inside the Docker container, run the quickstart setup script, which downloads the PeopleNet model from NVIDIA GPU Cloud (NGC):

    ros2 run isaac_ros_detectnet setup_model.sh --height 632 --width 1200 --config-file quickstart_config.pbtxt
    
  2. Continuing inside the Docker container, install the following dependencies:

    sudo apt-get install -y ros-humble-isaac-ros-examples
    
  3. Run the following launch file to spin up a demo of this package using the quickstart rosbag:

    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=detectnet interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_detectnet/quickstart_interface_specs.json
    
  4. Open a new terminal inside the Docker container:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
  5. Run the rosbag file to simulate an image stream:

    ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_detectnet/rosbags/detectnet_rosbag --remap image:=image_rect camera_info:=camera_info_rect
    

Visualize Results

  1. Open a new terminal and run the isaac_ros_detectnet_visualizer:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
    cd ${ISAAC_ROS_WS} && \
    source install/setup.bash && \
    ros2 run isaac_ros_detectnet isaac_ros_detectnet_visualizer.py --ros-args --remap image:=image_rect
    
  2. Open a new terminal and run rqt_image_view to visualize the output:

    cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh
    
    ros2 run rqt_image_view rqt_image_view /detectnet_processed_image
    
  3. You should see an output as shown below:

    RQT showing detection of people

Try More Examples

To continue your exploration, check out the following suggested examples:

This package only supports models based on the DetectNet_v2 architecture. Some of the supported DetectNet models from NGC:

| Model Name | Use Case |
| --- | --- |
| PeopleNet AMR | People counting with a mobile robot |
| PeopleNet | People counting, heatmap generation, social distancing |
| TrafficCamNet | Detect and track cars |
| DashCamNet | Identify objects from a moving vehicle |
| FaceDetectIR | Detect faces in a dark environment with an IR camera |

To learn how to use these models, click here.

ROS 2 Graph Configuration

To run the DetectNet object detection inference, the following ROS 2 nodes must be set up and running:

Diagram: https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_object_detection/ros2_detectnet_node_setup.svg

  1. Isaac ROS DNN Image Encoder: This takes an image message and converts it to a tensor (isaac_ros_tensor_list_interfaces/TensorList) that can be processed by the network.

  2. Isaac ROS DNN Inference - Triton: This executes the DetectNet network and takes, as input, the tensor from the DNN Image Encoder.

    Note

    The Isaac ROS TensorRT package is not able to perform inference with DetectNet models at this time.

    The output is a TensorList message containing the encoded detections. Use the parameters model_name and model_repository_paths to point to the model folder and set the model name. The .plan file should be located at $model_repository_path/$model_name/1/model.plan.

  3. Isaac ROS Detectnet Decoder: This node takes the TensorList with encoded detections as input, and outputs Detection2DArray messages for each frame. See the following section for the parameters.
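
The decoder's job can be pictured as: threshold a per-cell coverage (confidence) grid, map each surviving cell's regressed box values back to pixel coordinates, then cluster overlapping boxes. The following is a schematic sketch of that flow only, not the actual decoder math; the grid stride and the way bounding_box_scale and bounding_box_offset are applied here are illustrative assumptions:

```python
def decode_grid(coverage, bbox, stride=16.0, scale=35.0, offset=0.0,
                confidence_threshold=0.35):
    """Schematic DetectNet-style decode: for every grid cell whose coverage
    exceeds the threshold, turn its four regressed values into a pixel-space
    box anchored at the cell center. Illustrative only, not the real decoder."""
    detections = []
    for row, cov_row in enumerate(coverage):
        for col, cov in enumerate(cov_row):
            if cov < confidence_threshold:
                continue  # cell does not cover an object
            cx = col * stride + stride / 2.0  # cell center in pixels
            cy = row * stride + stride / 2.0
            left, top, right, bottom = bbox[row][col]
            detections.append({
                "confidence": cov,
                "x1": cx - left * scale + offset,
                "y1": cy - top * scale + offset,
                "x2": cx + right * scale + offset,
                "y2": cy + bottom * scale + offset,
            })
    return detections

# A 2x2 grid with one confident cell at position (1, 1).
cov = [[0.0, 0.0], [0.0, 0.9]]
box = [[[0, 0, 0, 0], [0, 0, 0, 0]],
       [[0, 0, 0, 0], [0.2, 0.2, 0.2, 0.2]]]
print(decode_grid(cov, box))
```

The real decoder then hands boxes like these to the clustering stage configured by the dbscan_* parameters below.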

Troubleshooting

Isaac ROS Troubleshooting

For solutions to problems with Isaac ROS, see troubleshooting.

Deep Learning Troubleshooting

For solutions to problems with using DNN models, see troubleshooting deeplearning.

API

Usage

ros2 launch isaac_ros_detectnet isaac_ros_detectnet.launch.py \
    label_list:=<list of labels> \
    enable_confidence_threshold:=<enable confidence thresholding> \
    enable_bbox_area_threshold:=<enable bbox size thresholding> \
    enable_dbscan_clustering:=<enable dbscan clustering> \
    confidence_threshold:=<minimum confidence value> \
    min_bbox_area:=<minimum bbox area value> \
    dbscan_confidence_threshold:=<minimum confidence for dbscan algorithm> \
    dbscan_eps:=<epsilon distance> \
    dbscan_min_boxes:=<minimum returned boxes> \
    dbscan_enable_athr_filter:=<area-to-hit-ratio filter> \
    dbscan_threshold_athr:=<area-to-hit-ratio threshold> \
    dbscan_clustering_algorithm:=<choice of clustering algorithm> \
    bounding_box_scale:=<bounding box normalization value> \
    bounding_box_offset:=<XY offset for bounding box>

ROS Parameters

| ROS Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| label_list | string[] | {"person", "bag", "face"} | The list of labels. These are loaded from labels.txt (downloaded with the model). |
| confidence_threshold | double | 0.35 | The minimum confidence used to threshold detections before clustering. |
| min_bbox_area | double | 100 | The minimum bounding box area used to threshold detections before clustering. |
| dbscan_confidence_threshold | double | 0.35 | The minimum confidence a cluster of detections must have to be considered an object. |
| dbscan_eps | double | 0.7 | The epsilon that controls merging of overlapping boxes. Refer to OpenCV groupRectangles and DBSCAN documentation for more information on epsilon. |
| dbscan_min_boxes | int | 1 | The minimum number of boxes to return. |
| dbscan_enable_athr_filter | int | 0 | Enables the area-to-hit-ratio (ATHR) filter. The ATHR is calculated as ATHR = sqrt(clusterArea) / nObjectsInCluster. |
| dbscan_threshold_athr | double | 0.0 | The area-to-hit-ratio threshold. |
| dbscan_clustering_algorithm | int | 1 | The clustering algorithm selection (1: DBSCAN clustering; 2: hybrid clustering, which produces more boxes that must then be reduced with NMS or another overlap-suppression method). |
| bounding_box_scale | double | 35.0 | The scale parameter, which should match the training configuration. |
| bounding_box_offset | double | 0.0 | Bounding box offset for both X and Y dimensions. |
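
The thresholding parameters act as pre-filters before clustering, and the ATHR formula decides whether a cluster survives the optional area-to-hit-ratio filter. A small sketch of both rules (the detection values are hypothetical):

```python
import math

def prefilter(detections, confidence_threshold=0.35, min_bbox_area=100.0):
    """Drop detections below the confidence or bounding-box-area thresholds,
    mirroring the pre-clustering filters described above."""
    return [d for d in detections
            if d["confidence"] >= confidence_threshold
            and d["w"] * d["h"] >= min_bbox_area]

def athr(cluster_area, n_objects_in_cluster):
    """Area-to-hit ratio used by the optional dbscan_enable_athr_filter."""
    return math.sqrt(cluster_area) / n_objects_in_cluster

# Hypothetical raw detections: confidence plus width/height in pixels.
raw = [{"confidence": 0.9, "w": 40, "h": 80},   # kept
       {"confidence": 0.2, "w": 40, "h": 80},   # dropped: confidence < 0.35
       {"confidence": 0.8, "w": 8,  "h": 10}]   # dropped: area 80 < 100
kept = prefilter(raw)
print(len(kept), athr(40 * 80, len(kept)))
```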

ROS Topics Subscribed

| ROS Topic | Interface | Description |
| --- | --- | --- |
| tensor_sub | isaac_ros_tensor_list_interfaces/TensorList | The tensor that represents the inferred aligned bounding boxes. |

ROS Topics Published

| ROS Topic | Interface | Description |
| --- | --- | --- |
| detectnet/detections | vision_msgs/Detection2DArray | Aligned image bounding boxes with detection class. |
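
Downstream consumers read each detection's center-based bounding box and its class hypotheses. A schematic sketch of that conversion, with plain dicts standing in for the actual vision_msgs types (field names here are simplified, not the real message API):

```python
def to_corner_boxes(detection_array):
    """Convert center/size boxes (as in a Detection2DArray-style structure)
    into (class_id, score, x1, y1, x2, y2) tuples. Plain dicts stand in for
    the ROS message types; field names are simplified for illustration."""
    out = []
    for det in detection_array["detections"]:
        cx, cy = det["bbox"]["center"]
        w, h = det["bbox"]["size_x"], det["bbox"]["size_y"]
        best = max(det["results"], key=lambda r: r["score"])  # top hypothesis
        out.append((best["class_id"], best["score"],
                    cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return out

# One hypothetical person detection centered at (320, 240), 80x160 pixels.
msg = {"detections": [{"bbox": {"center": (320.0, 240.0),
                                "size_x": 80.0, "size_y": 160.0},
                       "results": [{"class_id": "person", "score": 0.92}]}]}
print(to_corner_boxes(msg))
```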