<a href="https://apps.apple.com/app/id1452689527" target="_blank">
<img src="https://user-images.githubusercontent.com/26833433/82944393-f7644d80-9f4f-11ea-8b87-1a5b04f555f1.jpg" width="1000"></a>
 
![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)
This repository represents Ultralytics open-source research into future object detection methods, and incorporates our lessons learned and best practices evolved over training thousands of models on custom client datasets with our previous YOLO repository https://github.com/ultralytics/yolov3. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.
<img src="https://user-images.githubusercontent.com/26833433/85340570-30360a80-b49b-11ea-87cf-bdf33d53ae15.png" width="1000">** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 8, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
- **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.
- **June 22, 2020**: [PANet](https://arxiv.org/abs/1803.01534) updates: new heads, reduced parameters, improved speed and mAP [364fcfd](https://github.com/ultralytics/yolov5/commit/364fcfd7dba53f46edd4f04c037a039c0a287972).
- **June 19, 2020**: [FP16](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.half) as new default for smaller checkpoints and faster inference [d4c6674](https://github.com/ultralytics/yolov5/commit/d4c6674c98e19df4c40e33a777610a18d1961145).
- **June 9, 2020**: [CSP](https://github.com/WongKinYiu/CrossStagePartialNetworks) updates: improved speed, size, and accuracy (credit to @WongKinYiu for CSP).
- **May 27, 2020**: Public release. YOLOv5 models are SOTA among all known YOLO implementations.
- **April 1, 2020**: Start development of future compound-scaled [YOLOv3](https://github.com/ultralytics/yolov3)/[YOLOv4](https://github.com/AlexeyAB/darknet)-based PyTorch models.
## Pretrained Checkpoints
| Model | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>GPU</sub> | FPS<sub>GPU</sub> || params | FLOPS |
|---------- |------ |------ |------ | -------- | ------| ------ |------ | :------: |
| [YOLOv5s](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 36.1 | 36.1 | 55.3 | **2.1ms** | **476** || 7.5M | 13.2B
| [YOLOv5m](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 43.5 | 43.5 | 62.5 | 3.0ms | 333 || 21.8M | 39.4B
| [YOLOv5l](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 47.0 | 47.1 | 65.6 | 3.9ms | 256 || 47.8M | 88.1B
| [YOLOv5x](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | **49.0** | **49.0** | **67.4** | 6.1ms | 164 || 89.0M | 166.4B
| | | | | | || |
| [YOLOv3-SPP](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 45.6 | 45.5 | 65.2 | 4.5ms | 222 || 63.0M | 118.0B
** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results in the table denote val2017 accuracy.
** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. Reproduce by `python test.py --data coco.yaml --img 672 --conf 0.001`
** Speed<sub>GPU</sub> measures end-to-end time per image averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) instance with one V100 GPU, and includes image preprocessing, PyTorch FP16 image inference at --batch-size 32 --img-size 640, postprocessing and NMS. Average NMS time included in this chart is 1-2ms/img. Reproduce by `python test.py --data coco.yaml --img 640 --conf 0.1`
** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
## Requirements
Python 3.7 or later with all `requirements.txt` dependencies installed, including `torch >= 1.5`. To install run:
```bash
$ pip install -U -r requirements.txt
```
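A quick way to confirm the environment satisfies the `torch >= 1.5` requirement and can see a GPU (a minimal, optional sanity check):
```python
import torch

# Installed PyTorch version should be >= 1.5 per requirements.txt
print(f"PyTorch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```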
## Tutorials
* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)
* [ONNX and TorchScript Export](https://github.com/ultralytics/yolov5/issues/251)
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
## Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Google Colab Notebook** with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
- **Kaggle Notebook** with free GPU: [https://www.kaggle.com/ultralytics/yolov5](https://www.kaggle.com/ultralytics/yolov5)
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Docker Image** https://hub.docker.com/r/ultralytics/yolov5. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) ![Docker Pulls](https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker)
## Inference
Inference can be run on most common media formats. Model [checkpoints](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) are downloaded automatically if available. Results are saved to `./inference/output`.
```bash
$ python detect.py --source 0 # webcam
file.jpg # image
file.mp4 # video
path/ # directory
path/*.jpg # glob
rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa # rtsp stream
http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8 # http stream
```
To run inference on examples in the `./inference/images` folder:
```bash
$ python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.4, device='', fourcc='mp4v', half=False, img_size=640, iou_thres=0.5, output='inference/output', save_txt=False, source='./inference/images/', view_img=False, weights='yolov5s.pt')
Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)
Downloading https://drive.google.com/uc?export=download&id=1R5T6rIyy3lLwgFXNms8whc-387H0tMQO as yolov5s.pt... Done (2.6s)
image 1/2 inference/images/bus.jpg: 640x512 3 persons, 1 bus, Done. (0.009s)
image 2/2 inference/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.009s)
Results saved to /content/yolov5/inference/output
```
<img src="https://user-images.githubusercontent.com/26833433/83082816-59e54880-a039-11ea-8abe-ab90cc1ec4b0.jpeg" width="500">
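Models can also be loaded programmatically via PyTorch Hub (see the PyTorch Hub tutorial above). A minimal loading sketch, assuming the `yolov5s` entry point exposed by `hubconf.py`; `detect.py` remains the reference path for full pre/post-processing (letterboxing, NMS):
```python
import torch

# Download and load a pretrained YOLOv5s model from the ultralytics/yolov5 repo.
# pretrained=True fetches the yolov5s.pt checkpoint if it is not already cached.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.eval()  # inference mode; feed preprocessed image tensors as in detect.py
```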
## Training
Download [COCO](https://github.com/ultralytics/yolov5/blob/master/data/get_coco2017.sh), install [Apex](https://github.com/NVIDIA/apex) and run command below. Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
```bash
$ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
                                         yolov5m                                40
                                         yolov5l                                24
                                         yolov5x                                16
```
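Training metrics are appended to a `results.txt` file as training progresses. A minimal sketch for visualizing them afterwards, assuming the `plot_results` helper in `utils/utils.py` (present in this repository at the time of writing):
```python
# Run from the repository root after training; plots results*.txt as results.png.
from utils.utils import plot_results

plot_results()
```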