# Official YOLOv7
Implementation of paper - [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/yolov7-trainable-bag-of-freebies-sets-new/real-time-object-detection-on-coco)](https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=yolov7-trainable-bag-of-freebies-sets-new)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/yolov7)
<a href="https://colab.research.google.com/gist/AlexeyAB/b769f5795e65fdab80086f6cb7940dae/yolov7detection.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
[![arxiv.org](http://img.shields.io/badge/cs.CV-arXiv%3A2207.02696-B31B1B.svg)](https://arxiv.org/abs/2207.02696)
<div align="center">
<a href="./">
<img src="./figure/performance.png" width="79%"/>
</a>
</div>
## Web Demo
- Integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces/akhaliq/yolov7) using Gradio. Try out the Web Demo [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/yolov7)
## Performance
MS COCO
| Model | Test Size | AP<sup>test</sup> | AP<sub>50</sub><sup>test</sup> | AP<sub>75</sub><sup>test</sup> | batch 1 fps | batch 32 average time |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
| [**YOLOv7**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt) | 640 | **51.4%** | **69.7%** | **55.9%** | 161 *fps* | 2.8 *ms* |
| [**YOLOv7-X**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x.pt) | 640 | **53.1%** | **71.2%** | **57.8%** | 114 *fps* | 4.3 *ms* |
| | | | | | | |
| [**YOLOv7-W6**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6.pt) | 1280 | **54.9%** | **72.6%** | **60.1%** | 84 *fps* | 7.6 *ms* |
| [**YOLOv7-E6**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6.pt) | 1280 | **56.0%** | **73.5%** | **61.2%** | 56 *fps* | 12.3 *ms* |
| [**YOLOv7-D6**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6.pt) | 1280 | **56.6%** | **74.0%** | **61.8%** | 44 *fps* | 15.0 *ms* |
| [**YOLOv7-E6E**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e.pt) | 1280 | **56.8%** | **74.4%** | **62.1%** | 36 *fps* | 18.7 *ms* |
## Installation
Docker environment (recommended)
<details><summary> <b>Expand</b> </summary>
``` shell
# create the docker container; adjust --shm-size to match the shared memory available on your machine
nvidia-docker run --name yolov7 -it -v your_coco_path/:/coco/ -v your_code_path/:/yolov7 --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3
# apt install required packages
apt update
apt install -y zip htop screen libgl1-mesa-glx
# pip install required packages
pip install seaborn thop
# go to code folder
cd /yolov7
```
</details>
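A local (non-Docker) setup also works; a minimal sketch, assuming Python ≥ 3.7 and a CUDA-enabled PyTorch build are already available:
``` shell
# clone the repository and install the remaining Python dependencies
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip install -r requirements.txt
```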
## Testing
[`yolov7.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt) [`yolov7x.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x.pt) [`yolov7-w6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6.pt) [`yolov7-e6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6.pt) [`yolov7-d6.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6.pt) [`yolov7-e6e.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e.pt)
``` shell
python test.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.65 --device 0 --weights yolov7.pt --name yolov7_640_val
```
You will get the following results:
```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.51206
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.69730
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.55521
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.35247
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.55937
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.66693
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.38453
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.63765
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.68772
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.53766
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.73549
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.83868
```
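The same script evaluates the 1280-input models; a sketch with the flags adjusted accordingly (the weight file and run name are examples; reduce `--batch` if GPU memory is limited):
``` shell
# evaluate a P6 (1280-input) model, e.g. YOLOv7-W6
python test.py --data data/coco.yaml --img 1280 --batch 32 --conf 0.001 --iou 0.65 --device 0 --weights yolov7-w6.pt --name yolov7-w6_1280_val
```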
To measure accuracy, download the [COCO 2017 annotations for pycocotools](http://images.cocodataset.org/annotations/annotations_trainval2017.zip) and place `instances_val2017.json` at `./coco/annotations/instances_val2017.json`.
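A sketch of that download step, assuming the `./coco` layout used by `data/coco.yaml`:
``` shell
# fetch and extract the COCO 2017 annotations (~241 MB)
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip annotations_trainval2017.zip -d ./coco   # yields ./coco/annotations/instances_val2017.json
rm annotations_trainval2017.zip
```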
## Training
Data preparation
``` shell
bash scripts/get_coco.sh
```
* Download the MS COCO dataset images ([train](http://images.cocodataset.org/zips/train2017.zip), [val](http://images.cocodataset.org/zips/val2017.zip), [test](http://images.cocodataset.org/zips/test2017.zip)) and [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip). If you have previously used a different version of YOLO, we strongly recommend that you delete the `train2017.cache` and `val2017.cache` files and redownload the [labels](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip). If the script is not an option in your environment, see the manual-download sketch below.
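A sketch of the equivalent manual steps, with the target layout assumed from `data/coco.yaml` (verify against `scripts/get_coco.sh` before relying on it):
``` shell
# download images and YOLO-format labels; extraction paths are assumptions, not verified against the script
mkdir -p coco/images
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/test2017.zip
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/coco2017labels-segments.zip
unzip train2017.zip -d coco/images && unzip val2017.zip -d coco/images && unzip test2017.zip -d coco/images
unzip coco2017labels-segments.zip   # expected to provide coco/labels and the train2017.txt / val2017.txt lists
```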
Single GPU training
``` shell
# train p5 models
python train.py --workers 8 --device 0 --batch-size 32 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml
# train p6 models
python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/coco.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights '' --name yolov7-w6 --hyp data/hyp.scratch.p6.yaml
```
Multiple GPU training
``` shell
# train p5 models
python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 128 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml
# train p6 models
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train_aux.py --workers 8 --device 0,1,2,3,4,5,6,7 --sync-bn --batch-size 128 --data data/coco.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights '' --name yolov7-w6 --hyp data/hyp.scratch.p6.yaml
```
## Transfer learning
[`yolov7_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7_training.pt) [`yolov7x_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x_training.pt) [`yolov7-w6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6_training.pt) [`yolov7-e6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6_training.pt) [`yolov7-d6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6_training.pt) [`yolov7-e6e_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e_training.pt)
Single GPU finetuning for a custom dataset
``` shell
# finetune p5 models
python train.py --workers 8 --device 0 --batch-size 32 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml
# finetune p6 models
python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/custom.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6-custom.yaml --weights 'yolov7-w6_training.pt' --name yolov7-w6-custom --hyp data/hyp.scratch.custom.yaml
```
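The finetuning commands above expect a dataset definition at `data/custom.yaml` and a model config at `cfg/training/yolov7-custom.yaml` (typically a copy of `cfg/training/yolov7.yaml` with `nc` changed to your class count). A minimal sketch of the dataset file, with placeholder paths and class names following the `data/coco.yaml` schema:
``` shell
# write a minimal dataset config; adjust the paths, nc, and names to your data
cat > data/custom.yaml <<'EOF'
train: ./custom/train.txt   # one image path per line (or a directory of images)
val: ./custom/val.txt
nc: 2                       # number of classes
names: ['class0', 'class1'] # class names, in label-index order
EOF
```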
## Re-parameterization
See [reparameterization.ipynb](tools/reparameterization.ipynb)
## Inference
On video:
``` shell
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source yourvideo.mp4
```
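The same script runs on single images; for example, using one of the repository's sample images:
``` shell
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
```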