==============
|package_name|
==============

:ir_github:` `

Quickstart
----------

.. note::

   This quickstart demonstrates |package_name| in an image segmentation
   application. The demo therefore features an encoder node and a decoder node
   to perform pre-processing and post-processing, respectively. In reality, the
   raw inference result is simply a tensor. To use the packages in other
   contexts, refer :doc:`here `.

1. Set up your development environment by following the instructions
   :doc:`here `.

2. Clone ``isaac_ros_common``, ``isaac_ros_image_segmentation``, and this
   repository under ``${ISAAC_ROS_WS}/src``.

   .. code:: bash

      cd ${ISAAC_ROS_WS}/src

   .. code:: bash

      git clone :ir_clone:``

   .. code:: bash

      git clone :ir_clone:``

   .. code:: bash

      git clone :ir_clone:``

3. Launch the Docker container using the ``run_dev.sh`` script:

   .. code:: bash

      cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh

4. Install this package's dependencies, along with an additional package used
   for this quickstart.

   :ir_apt:

   .. code:: bash

      sudo apt-get install -y ros-humble-isaac-ros-triton ros-humble-isaac-ros-unet

5. This example uses ``PeopleSemSegNet ShuffleSeg``. Download the ETLT file and
   the ``int8`` inference mode cache file:

   .. code:: bash

      mkdir -p /tmp/models/peoplesemsegnet_shuffleseg/1 && \
      cd /tmp/models/peoplesemsegnet_shuffleseg && \
      wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_etlt.etlt && \
      wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_cache.txt

6. Convert the ETLT file to a TensorRT plan file:

   .. code:: bash

      /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t int8 -c peoplesemsegnet_shuffleseg_cache.txt -e /tmp/models/peoplesemsegnet_shuffleseg/1/model.plan -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt

7. Create a file named ``/tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt``
   by copying the sample Triton config file:

   .. code:: bash

      cp /workspaces/isaac_ros-dev/src/isaac_ros_dnn_inference/resources/peoplesemsegnet_shuffleseg_config.pbtxt /tmp/models/peoplesemsegnet_shuffleseg/config.pbtxt

8. Run the following launch files to spin up a demo of this package:

   Launch Triton:

   .. code:: bash

      ros2 launch isaac_ros_unet isaac_ros_unet_triton.launch.py model_name:=peoplesemsegnet_shuffleseg model_repository_paths:=['/tmp/models'] input_binding_names:=['input_2:0'] output_binding_names:=['argmax_1'] network_output_type:='argmax' input_image_width:=1200 input_image_height:=632

   In **another** terminal, enter the Docker container:

   .. code:: bash

      cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh

   Then, play the ROS bag from ``isaac_ros_image_segmentation``:

   .. code:: bash

      ros2 bag play -l src/isaac_ros_image_segmentation/resources/rosbags/unet_sample_data/

9. Visualize and validate the output of the package:

   In a **third** terminal, enter the Docker container:

   .. code:: bash

      cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
      ./scripts/run_dev.sh

   Then echo the inference result:

   .. code:: bash

      ros2 topic echo /tensor_sub

   The expected result should look like this:

   .. code:: bash

      header:
        stamp:
          sec:
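   Beyond echoing the raw tensor, you can also check the decoded segmentation
   mask programmatically. The following is a minimal sketch of an ``rclpy``
   subscriber that logs the resolution and encoding of each received mask; it
   assumes the decoder publishes a ``sensor_msgs/msg/Image`` on
   ``/unet/colored_segmentation_mask`` (confirm the actual topic name with
   ``ros2 topic list`` inside the container).

   .. code:: python

      # Minimal sketch: subscribe to the decoded segmentation mask and log its
      # dimensions. The topic name below is an assumption; verify it with
      # `ros2 topic list` before running.
      import rclpy
      from rclpy.node import Node
      from sensor_msgs.msg import Image


      class MaskChecker(Node):
          def __init__(self):
              super().__init__('mask_checker')
              # Subscribe to the colored segmentation mask produced by the decoder.
              self.subscription = self.create_subscription(
                  Image, '/unet/colored_segmentation_mask', self.callback, 10)

          def callback(self, msg: Image):
              # Log resolution and encoding of each received mask.
              self.get_logger().info(
                  f'Received mask: {msg.width}x{msg.height}, encoding={msg.encoding}')


      def main():
          rclpy.init()
          node = MaskChecker()
          rclpy.spin(node)
          node.destroy_node()
          rclpy.shutdown()


      if __name__ == '__main__':
          main()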