# EfficientDet Object Detection in TensorRT

These scripts help with conversion and execution of [Google EfficientDet](https://arxiv.org/abs/1911.09070) models with [NVIDIA TensorRT](https://developer.nvidia.com/tensorrt). This process is compatible with models trained through either Google AutoML or the TensorFlow Object Detection API.
## Contents
- [Changelog](#changelog)
- [Setup](#setup)
- [Model Conversion](#model-conversion)
  * [TensorFlow Saved Model](#tensorflow-saved-model)
  * [Create ONNX Graph](#create-onnx-graph)
  * [Build TensorRT Engine](#build-tensorrt-engine)
- [Inference](#inference)
  * [Inference in Python](#inference-in-python)
  * [Evaluate mAP Metric](#evaluate-map-metric)
  * [TF vs TRT Comparison](#tf-vs-trt-comparison)
## Changelog
- January 2022:
  - Added support for EfficientDet Lite and AdvProp models.
  - Added dynamic batch support.
  - Added mixed precision engine builder.
- July 2021:
  - Initial release.
## Setup
We recommend running these scripts on an environment with TensorRT >= 8.0.1 and TensorFlow >= 2.5.
Install TensorRT as per the [TensorRT Install Guide](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html). You will also need to make sure the Python bindings for TensorRT are installed correctly. These are available through the `python3-libnvinfer` and `python3-libnvinfer-dev` packages included in your TensorRT download.
To simplify TensorRT and TensorFlow installation, use an [NGC TensorFlow Docker Image](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow), such as:
```bash
docker pull nvcr.io/nvidia/tensorflow:22.01-tf2-py3
```
Install all dependencies listed in `requirements.txt`:
```bash
pip3 install -r requirements.txt
```
On Jetson Nano, you will need `nvcc` in the `PATH` when installing pycuda:
```bash
export PATH=${PATH}:/usr/local/cuda/bin/
```
You will also need the latest `onnx-graphsurgeon` Python module. If it was not already installed with TensorRT, you can install it manually by running:
```bash
pip3 install onnx-graphsurgeon --index-url https://pypi.ngc.nvidia.com
```
**NOTE:** Please make sure that the `onnx-graphsurgeon` module installed by pip is version >= 0.3.9.
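To confirm that the installed module meets this minimum, the version string (as reported by `pip3 show onnx-graphsurgeon`) can be compared numerically rather than lexically. A minimal stdlib-only sketch:

```python
def version_at_least(installed: str, minimum: str = "0.3.9") -> bool:
    """Compare dotted version strings numerically, so that e.g.
    "0.10.0" correctly ranks above "0.3.9" (string comparison would not)."""
    def parse(v: str):
        return tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)
```
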
Finally, you may want to clone the EfficientDet code from the [AutoML Repository](https://github.com/google/automl) to use some helper utilities from it. This exporter has been tested with commit [0b0ba5e](https://github.com/google/automl/tree/0b0ba5ebd0860edd939465fc4152da4ff9f79b44/efficientdet) from December 2021, so it may be a good idea to check out the repository at that specific commit to avoid possible future incompatibilities:
```bash
git clone https://github.com/google/automl
cd automl
git checkout 0b0ba5e
```
## Model Conversion
The workflow to convert an EfficientDet model is basically TensorFlow → ONNX → TensorRT, and so parts of this process require TensorFlow to be installed. If you are performing this conversion to run inference on the edge, such as for NVIDIA Jetson devices, it might be easier to do the ONNX conversion on a PC first.
### TensorFlow Saved Model
The starting point of conversion is a TensorFlow saved model. This can be exported from your own trained models, or you can download a pre-trained model. This conversion script is compatible with three types of models:
1. EfficientDet models trained with the [AutoML](https://github.com/google/automl/tree/master/efficientdet) framework. Compatible with all "d0-7", "lite0-4" and "AdvProp" variations.
2. EfficientDet models trained with the [TensorFlow Object Detection](https://github.com/tensorflow/models/tree/master/research/object_detection) API (TFOD).
3. EfficientDet models pre-trained on COCO and downloaded from [TFHub](https://tfhub.dev/s?network-architecture=efficientdet).
#### 1. AutoML Models
If you are training your own model, you will need the training checkpoint. You can also download a pre-trained checkpoint from the "ckpt" links on the [AutoML Repository](https://github.com/google/automl/tree/master/efficientdet) README file, such as [this](https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/coco2/efficientdet-d0.tar.gz).
This converter is compatible with all *efficientdet-d0* through *efficientdet-d7x* and *efficientdet-lite0* through *efficientdet-lite4* model variations. It also works with the [AdvProp](https://github.com/google/automl/blob/master/efficientdet/Det-AdvProp.md) models. However, AdvProp models are trained with the `scale_range` hparam, which changes the expected input image value range, so you will need to adjust the preprocessor argument when creating the ONNX graph. More details in the corresponding section below.
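To illustrate why the preprocessor choice matters, here is a minimal per-pixel sketch of the two input value ranges. The exact constants are assumptions based on common ImageNet statistics and the `scale_range` convention, not values taken from the converter itself:

```python
def preprocess_default(pixel: float) -> float:
    # Default AutoML preprocessing (assumption): ImageNet mean/std
    # normalization, shown here for the red channel only.
    return (pixel / 255.0 - 0.485) / 0.229

def preprocess_advprop(pixel: float) -> float:
    # AdvProp `scale_range` preprocessing (assumption): map [0, 255]
    # linearly onto [-1, 1], with no per-channel statistics.
    return pixel / 127.5 - 1.0
```

An engine built with the wrong preprocessor will silently receive inputs outside the range the network was trained on, degrading detection quality.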
The checkpoint directory should have a file structure such as this:
```
efficientdet-d0
├── model.data-00000-of-00001
├── model.index
└── model.meta
```
To export a saved model from here, clone and install the [AutoML](https://github.com/google/automl) repository, and run:
```bash
cd /path/to/automl/efficientdet
python3 model_inspect.py \
    --runmode saved_model \
    --model_name efficientdet-d0 \
    --ckpt_path /path/to/efficientdet-d0 \
    --saved_model_dir /path/to/saved_model
```
Where the `--model_name` argument is the network name corresponding to this checkpoint, usually between `efficientdet-d0` and `efficientdet-d7x`. The `--ckpt_path` points to the directory holding the checkpoint as described above. The TF saved model will be exported to the path given by `--saved_model_dir`.
> **Custom Image Size:** If your application requires inference at a different image resolution than the training input size, you can re-export the model for the exact size you require. To do so, export a saved model from checkpoint as shown above, but add an extra argument as: `--hparams 'image_size=1920x1280'`
#### 2. TFOD Models
You can download one of the pre-trained TFOD models from the [TF2 Detection Model Zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md), such as:
```bash
wget http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d0_coco17_tpu-32.tar.gz
```
When extracted, this package holds a directory named `saved_model` which holds the saved model ready for conversion.
However, if you are working with your own trained EfficientDet model from the TensorFlow Object Detection API, or if you need to re-export the saved model, you can do so from the training checkpoint. The downloaded package above also contains a pre-trained checkpoint. The structure is similar to this:
```
efficientdet_d0_coco17_tpu-32
├── checkpoint
│   ├── ckpt-0.data-00000-of-00001
│   └── ckpt-0.index
├── pipeline.config
└── saved_model
    └── saved_model.pb
```
To (re-)export a saved model from here, clone the [TF Models Repository](https://github.com/tensorflow/models) and install the TFOD API following these [instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2.md#installation). Then run:
```bash
cd /path/to/models/research/object_detection
python3 exporter_main_v2.py \
    --input_type image_tensor \
    --trained_checkpoint_dir /path/to/efficientdet_d0_coco17_tpu-32/checkpoint \
    --pipeline_config_path /path/to/efficientdet_d0_coco17_tpu-32/pipeline.config \
    --output_directory /path/to/export
```
Where `--trained_checkpoint_dir` and `--pipeline_config_path` point to the corresponding paths in the training checkpoint. Under the path given by `--output_directory`, you will then find the newly created saved model in a directory aptly named `saved_model`.
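As a quick sanity check before moving on to the ONNX conversion, the export directory layout can be verified with a short stdlib-only sketch (the expected paths are assumed from the structure shown above):

```python
from pathlib import Path

def looks_like_tfod_export(export_dir: str) -> bool:
    """Check for the layout produced by exporter_main_v2.py:
    checkpoint/, pipeline.config, and saved_model/saved_model.pb."""
    root = Path(export_dir)
    return (
        (root / "checkpoint").is_dir()
        and (root / "pipeline.config").is_file()
        and (root / "saved_model" / "saved_model.pb").is_file()
    )
```
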
**NOTE:** TFOD EfficientDet models will have a slightly reduced