## Table Of Contents
- Description
- How does this sample work?
- Prerequisites
- Running the sample
- Additional resources
- License
- Changelog
- Known issues
## Description

This sample, `simple_progress_reporter`, uses TensorRT and its included ONNX parser to perform inference with ResNet-50 models saved in ONNX format. It displays animated progress bars while TensorRT builds the engine.
## How does this sample work?

This sample demonstrates how to build an engine from an ONNX model file using the open-source ONNX parser and then run inference. The ONNX parser can be used with any framework that supports the ONNX format (typically `.onnx` files). An `IProgressMonitor` object receives updates on the progress of the build and displays them as ASCII progress bars on stdout.
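For orientation, the sketch below shows one way a progress monitor can be attached to an engine build with the TensorRT Python API (`trt.IProgressMonitor` and `IBuilderConfig.progress_monitor`). It is an illustrative outline, not the sample's actual code; the `BarMonitor` class name and the ONNX file path are placeholders.

```python
import tensorrt as trt

class BarMonitor(trt.IProgressMonitor):
    """Minimal monitor that prints one line per build step (the sample draws animated bars instead)."""

    def __init__(self):
        trt.IProgressMonitor.__init__(self)

    def phase_start(self, phase_name, parent_phase, num_steps):
        # Called when the builder enters a new (possibly nested) phase.
        print(f"start {phase_name}: {num_steps} steps")

    def step_complete(self, phase_name, step):
        # Called after each step; returning False cancels the build.
        print(f"{phase_name}: finished step {step}")
        return True

    def phase_finish(self, phase_name):
        print(f"done {phase_name}")

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network, as required by the ONNX parser.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("ResNet50.onnx", "rb") as f:  # placeholder model path
    parser.parse(f.read())

config = builder.create_builder_config()
monitor = BarMonitor()                  # keep a reference for the duration of the build
config.progress_monitor = monitor       # build progress is reported to the monitor
engine_bytes = builder.build_serialized_network(network, config)
```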
## Prerequisites

- Install the dependencies for Python:

  ```bash
  pip3 install -r requirements.txt
  ```
## Running the sample

- Run the sample from a terminal to create a TensorRT inference engine and run inference:

  ```bash
  python3 simple_progress_monitor.py
  ```

  **Note:** If the TensorRT sample data is not installed in the default location, for example `/usr/src/tensorrt/data/`, the `data` directory must be specified. For example:

  ```bash
  python3 simple_progress_monitor.py -d /path/to/my/data/
  ```

  **Note:** Do not redirect the output of this script to a file or pipe; the progress bars are drawn with terminal control sequences (see the sketch after these steps).
- Verify that the sample ran successfully. If it did, you should see output similar to the following:

  ```
  Correctly recognized data/samples/resnet50/reflex_camera.jpeg as reflex camera
  ```
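The redirection caveat above exists because the progress bars are redrawn in place using terminal control sequences, which only render correctly on an interactive terminal. The snippet below is an illustrative sketch of that technique, not code taken from the sample:

```python
import sys
import time

def redraw(line):
    # "\033[A" moves the cursor up one line and "\033[K" erases to the end of
    # the line; written to a file or pipe, these bytes appear as literal garbage.
    sys.stdout.write("\033[A\033[K" + line + "\n")
    sys.stdout.flush()

if sys.stdout.isatty():            # only animate when attached to a terminal
    print("building: 0%")
    for pct in (25, 50, 75, 100):
        time.sleep(0.2)
        redraw(f"building: {pct}%")
else:
    print("stdout is not a terminal; falling back to plain output")
```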
To see the full list of available options and their descriptions, use the `-h` or `--help` command line option. For example:

```
usage: simple_progress_monitor.py [-h] [-d DATADIR]

Runs a ResNet50 network with a TensorRT inference engine. Displays intermediate build progress.

optional arguments:
  -h, --help            show this help message and exit
  -d DATADIR, --datadir DATADIR
                        Location of the TensorRT sample data directory.
                        (default: /usr/src/tensorrt/data)
```
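The help text above corresponds to a straightforward `argparse` setup. The following is a hypothetical reconstruction of that interface, not the sample's actual code:

```python
import argparse

parser = argparse.ArgumentParser(
    description="Runs a ResNet50 network with a TensorRT inference engine. "
    "Displays intermediate build progress.",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter,  # appends "(default: ...)" to help text
)
parser.add_argument(
    "-d",
    "--datadir",
    default="/usr/src/tensorrt/data",
    help="Location of the TensorRT sample data directory.",
)
args = parser.parse_args()
print(f"Using data directory: {args.datadir}")
```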
## Additional resources

The following resources provide a deeper understanding about importing a model into TensorRT using Python:
**ResNet-50**

**Parsers**

**Documentation**
- Introduction To NVIDIA’s TensorRT Samples
- Working With TensorRT Using The Python API
- Importing A Model Using A Parser In Python
- NVIDIA’s TensorRT Documentation Library
**Terminal Escape Sequences**
- Linux: XTerm Control Sequences
- Windows: Console Virtual Terminal Sequences
## License

For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.
## Changelog

**August 2023**
Removed support for Python versions < 3.8.

**June 2023**
This `README.md` file was created and reviewed.
## Known issues

There are no known issues in this sample.