# YOLOV5-ti-lite Object Detection Models
This repository is based on [ultralytics/yolov5](https://github.com/ultralytics/yolov5). As per the [Official Readme file from Ultralytics](./README_ultralytics.md), YOLOV5 is a family of object detectors with the following major differences from YOLOV3:
* Darknet-csp backbone instead of vanilla Darknet. Reduces complexity by 30%.
* PANet feature extractor instead of FPN.
* Better box-decoding technique.
* Genetic-algorithm-based anchor-box selection.
* Several new augmentation techniques, e.g. Mosaic augmentation.
<br/>
## **Official Models from Ultralytics**
|Dataset |Model Name |Input Size |GFLOPS |AP[0.5:0.95]%| AP50%|Notes |
|--------|------------------------------- |-----------|----------|-------------|------|----- |
|COCO    |Yolov5s6                        |1280x1280  |**69.6**  | 43.3        | 61.9 | |
|COCO    |Yolov5s6_640                    |640x640    |**17.4**  | 38.9        | 56.8 |(Train@1280, val@640) |
|COCO    |Yolov5m6                        |1280x1280  |**209.6** | 50.5        | 68.7 | |
|COCO    |Yolov5m6_640                    |640x640    |**52.4**  | 45.4        | 63.6 |(Train@1280, val@640) |
|COCO    |Yolov5l6                        |1280x1280  |**470.8** | 53.4        | 71.1 | |
|COCO    |Yolov5l6_640                    |640x640    |**117.7** | 49.0        | 67.0 |(Train@1280, val@640) |
<br/>
## **YOLOV5-ti-lite model definition**
* YOLOV5-ti-lite is a version of YOLOV5 from TI for efficient edge deployment. This naming convention is chosen to avoid conflict with future releases of YOLOV5-lite models from Ultralytics.
* Here is a brief description of the changes made to derive yolov5-ti-lite from yolov5:
    * YOLOV5 introduces a Focus layer as the very first layer of the network, replacing the first few heavy convolution layers present in YOLOv3. It reduces the complexity of the network by 7% and the training time by 15%. However, the slice operations in the Focus layer are not embedded-friendly, so we replace it with a lightweight convolution layer. Here is a pictorial description of the changes from YOLOv3 to YOLOv5 to YOLOv5-ti-lite:
<p align="left"><img width="800" src="utils/figures/Focus.png"></p>
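The Focus-to-convolution change above can be sketched as follows. This is a simplified illustration, not the repo's exact code (the real layers live in the model definition files, and the channel counts here are arbitrary): both variants downsample 2x, but the first relies on slice-and-concatenate operations while the replacement is a single strided convolution.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """YOLOv5-style Focus: space-to-depth slicing, then a convolution."""
    def __init__(self, c1, c2, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c1 * 4, c2, k, 1, k // 2)

    def forward(self, x):
        # Slice the image into 4 sub-images and stack them along channels.
        # These strided-slice ops are what is hard to support on embedded devices.
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                    x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))

class FocusLite(nn.Module):
    """ti-lite replacement: a plain stride-2 convolution, no slicing."""
    def __init__(self, c1, c2, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, 2, k // 2)

    def forward(self, x):
        return self.conv(x)

x = torch.randn(1, 3, 64, 64)
print(Focus(3, 32)(x).shape)      # torch.Size([1, 32, 32, 32])
print(FocusLite(3, 32)(x).shape)  # torch.Size([1, 32, 32, 32])
```

Both modules produce feature maps of the same spatial size, so the replacement is drop-in from the architecture's point of view (the learned weights differ, so models must be retrained).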
    * SiLU activation is not well supported on embedded devices. It is also not quantization-friendly because of its unbounded nature. The same was observed for the hSwish activation function while [quantizing efficientnet](https://blog.tensorflow.org/2020/03/higher-accuracy-on-vision-models-with-efficientnet-lite.html). Hence, SiLU activation is replaced with ReLU.
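As a generic sketch of such an activation swap (in this repo the change is made in the model definitions themselves, not via a patch like this), every `nn.SiLU` module in a model tree can be replaced with `nn.ReLU` recursively:

```python
import torch.nn as nn

def silu_to_relu(model: nn.Module) -> nn.Module:
    """Recursively replace every SiLU activation with ReLU, in place."""
    for name, child in model.named_children():
        if isinstance(child, nn.SiLU):
            setattr(model, name, nn.ReLU(inplace=True))
        else:
            silu_to_relu(child)
    return model

m = nn.Sequential(nn.Conv2d(3, 8, 3), nn.SiLU(), nn.Conv2d(8, 8, 3), nn.SiLU())
m = silu_to_relu(m)
print(any(isinstance(c, nn.SiLU) for c in m.modules()))  # False
```

Unlike the pooling change below this one does alter floating-point behavior, which is why the ti-lite models are trained with ReLU from the start rather than patched after training.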
    * The SPP module with maxpool(k=13, s=1), maxpool(k=9, s=1) and maxpool(k=5, s=1) is replaced with various combinations of maxpool(k=3, s=1). The intention is to keep the receptive field and functionality the same. This change makes no difference to the model in floating point:
* maxpool(k=5, s=1) -> replaced with two maxpool(k=3,s=1)
* maxpool(k=9, s=1) -> replaced with four maxpool(k=3,s=1)
* maxpool(k=13, s=1)-> replaced with six maxpool(k=3,s=1) as shown below:
<p align="left"><img width="800" src="utils/figures/max_pool.png"></p>
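The equivalence above is exact, not approximate: max pooling is idempotent, and chaining n stride-1 pools of kernel 3 gives a 2n+1 receptive field, while PyTorch pads max-pool borders with negative infinity so edges behave identically. It can be checked numerically:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 32, 32)
mp3 = nn.MaxPool2d(3, stride=1, padding=1)

# maxpool(k=5, s=1) == two chained maxpool(k=3, s=1)
mp5 = nn.MaxPool2d(5, stride=1, padding=2)
assert torch.allclose(mp5(x), mp3(mp3(x)))

# maxpool(k=9, s=1) == four chained maxpool(k=3, s=1)
mp9 = nn.MaxPool2d(9, stride=1, padding=4)
y = x
for _ in range(4):
    y = mp3(y)
assert torch.allclose(mp9(x), y)
# likewise maxpool(k=13, s=1) == six chained maxpool(k=3, s=1)
```

Because the outputs are bit-identical in floating point, pretrained weights carry over unchanged; only the graph that the edge compiler sees is different.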
    * Variable-size inference is replaced with fixed-size inference, as preferred by edge devices, e.g. TFLite models are exported with a fixed input size.
## **Training and Testing**
* Training any model using this repo applies the above changes by default. The same commands as the official repository can be used to train models from scratch, e.g.
```
python train.py --data coco.yaml --cfg yolov5s6.yaml --weights '' --batch-size 64
# use --cfg yolov5m6.yaml for the medium model
```
* The Yolov5-l6-ti-lite model is finetuned for 100 epochs from the official checkpoint. To replicate the results for yolov5-l6-ti-lite, download the official pre-trained weights for yolov5-l6 and set the lr to 1e-3 in [hyp.scratch.yaml](data/hyp.scratch.yaml):
```
python train.py --data coco.yaml --cfg yolov5l6.yaml --weights 'yolov5l6.pt' --batch-size 40
```
* Pretrained model checkpoints, along with ONNX and prototxt files, are kept inside [pretrained_models](./pretrained_models).
* Run the following command to replicate the accuracy numbers for the pretrained checkpoints:
```
python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65 --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt
# substitute yolov5m6_640_ti_lite or yolov5l6_640_ti_lite to evaluate the other checkpoints
```
<br/>
### **Models trained by TI**
<br/>
<p float="left">
<img width="800" src="utils/figures/mAP_FLOPS.png">
</p>
### **Pre-trained Checkpoints**
|Dataset |Model Name |Input Size |GFLOPS |AP[0.5:0.95]%| AP50%|Notes |
|--------|------------------------------- |-----------|----------|-------------|------|----- |
|COCO |Yolov5s6_ti_lite_640 |640x640 |**17.48** |37.4 | 56.0 | |
|COCO |Yolov5s6_ti_lite_576 |576x576 |**14.16** |36.6 | 55.7 | (Train@ 640, val@576) |
|COCO |Yolov5s6_ti_lite_512 |512x512 |**11.18** |35.3 | 54.3 | (Train@ 640, val@512) |
|COCO |Yolov5s6_ti_lite_448 |448x448 |**8.56** |34.0 | 52.3 | (Train@ 640, val@448) |
|COCO |Yolov5s6_ti_lite_384 |384x384 |**6.30** |32.8 | 51.2 | (Train@ 384, val@384) |
|COCO |Yolov5s6_ti_lite_320 |320x320 |**4.38** |30.3 | 47.6 | (Train@ 384, val@320) |
|COCO |Yolov5m6_ti_lite_640 |640x640 |**52.5** |44.1 | 62.9 | |
|COCO |Yolov5m6_ti_lite_576 |576x576 |**42.52** |43.0 | 61.9 | (Train@ 640, val@576) |
|COCO |Yolov5m6_ti_lite_512 |512x512 |**32.16** |42.0 | 60.5 | (Train@ 640, val@512) |
|COCO    |Yolov5l6_ti_lite_640            |640x640    |**117.84** |47.1        | 65.6 | This model is finetuned from the official ckpt for 100 epochs|
There are three models in [pretrained_models](./pretrained_models). All other results are generated by evaluating these models at a different resolution. To reproduce the accuracy numbers at 512x512, run the following:
```
python val.py --data coco.yaml --img 512 --conf 0.001 --iou 0.65 --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt
# substitute yolov5m6_640_ti_lite or yolov5l6_640_ti_lite to evaluate the other checkpoints
```
### **ONNX export including detection:**
* Run the following command to export the entire model, including the detection part:
```
python export.py --weights pretrained_models/yolov5s6_640_ti_lite/weights/best.pt --img 640 --batch 1 --simplify --export-nms --opset 11 # export at 640x640 with batch size 1
```
* Apart from exporting the complete ONNX model, the above script generates a prototxt file that contains information about the detection layer. This prototxt file is required to deploy the model on a TI SoC.
## **References**
[1] [Official YOLOV5 repository](https://github.com/ultralytics/yolov5/) <br>
[2] [yolov5-improvements-and-evaluation, Roboflow](https://blog.roboflow.com/yolov5-improvements-and-evaluation/) <br>
[3] [Focus layer in YOLOV5]( https://github.com/ultralytics/yolov5/discussions/3181) <br>
[4] [CrossStagePartial Network](https://github.com/WongKinYiu/CrossStagePartialNetworks) <br>
[5] Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, and I-Hau Yeh. [CSPNet: A new backbone that can enhance learning capability of CNN](https://arxiv.org/abs/1911.11929). Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPR Workshop), 2020.