<a href="https://apps.apple.com/app/id1452689527" target="_blank">
<img src="https://user-images.githubusercontent.com/26833433/98699617-a1595a00-2377-11eb-8145-fc674eb9b1a7.jpg" width="1000"></a>
 
<a href="https://github.com/ultralytics/yolov5/actions"><img src="https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg" alt="CI CPU testing"></a>
This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.
<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313216-f0a5e100-9af5-11eb-8445-c682b60da2e3.png"></p>
<details>
<summary>YOLOv5-P5 640 Figure (click to expand)</summary>
<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313219-f1d70e00-9af5-11eb-9973-52b1f98d321a.png"></p>
</details>
<details>
<summary>Figure Notes (click to expand)</summary>
* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
- **April 11, 2021**: [v5.0 release](https://github.com/ultralytics/yolov5/releases/tag/v5.0): YOLOv5-P6 1280 models, [AWS](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart), [Supervise.ly](https://github.com/ultralytics/yolov5/issues/2518) and [YouTube](https://github.com/ultralytics/yolov5/pull/2752) integrations.
- **January 5, 2021**: [v4.0 release](https://github.com/ultralytics/yolov5/releases/tag/v4.0): nn.SiLU() activations, [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) logging, [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/) integration.
- **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP.
- **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.
## Pretrained Checkpoints
[assets]: https://github.com/ultralytics/yolov5/releases
Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>test<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>V100 (ms) | |params<br><sup>(M) |FLOPS<br><sup>640 (B)
--- |--- |--- |--- |--- |--- |---|--- |---
[YOLOv5s][assets] |640 |36.7 |36.7 |55.4 |**2.0** | |7.3 |17.0
[YOLOv5m][assets] |640 |44.5 |44.5 |63.3 |2.7 | |21.4 |51.3
[YOLOv5l][assets] |640 |48.2 |48.2 |66.9 |3.8 | |47.0 |115.4
[YOLOv5x][assets] |640 |**50.4** |**50.4** |**68.8** |6.1 | |87.7 |218.8
| | | | | | || |
[YOLOv5s6][assets] |1280 |43.3 |43.3 |61.9 |**4.3** | |12.7 |17.4
[YOLOv5m6][assets] |1280 |50.5 |50.5 |68.7 |8.4 | |35.9 |52.4
[YOLOv5l6][assets] |1280 |53.4 |53.4 |71.1 |12.3 | |77.2 |117.7
[YOLOv5x6][assets] |1280 |**54.4** |**54.4** |**72.0** |22.4 | |141.8 |222.9
| | | | | | || |
[YOLOv5x6][assets] TTA |1280 |**55.0** |**55.0** |**72.0** |70.8 | |- |-
<details>
<summary>Table Notes (click to expand)</summary>
* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results; all other AP results denote val2017 accuracy.
* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## Requirements
Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`. To install, run:
```bash
$ pip install -r requirements.txt
```
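After installation, it can be worth confirming that the installed PyTorch satisfies the `torch>=1.7` requirement and whether a GPU is visible. A minimal sketch (an illustrative script, not part of this repository; output depends on your environment):

```python
# Quick sanity check of the PyTorch install (illustrative, not part of this repo)
import torch

print(f"torch {torch.__version__}")                    # should report >= 1.7
print(f"CUDA available: {torch.cuda.is_available()}")  # False means CPU-only training/inference
if torch.cuda.is_available():
    print(f"device: {torch.cuda.get_device_name(0)}")  # e.g. Tesla V100 on GCP/AWS instances
```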
## Tutorials
* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results) ☘️ RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289) 🌟 NEW
* [Supervisely Ecosystem](https://github.com/ultralytics/yolov5/issues/2518) 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) ⭐ NEW (see the example after this list)
* [ONNX and TorchScript Export](https://github.com/ultralytics/yolov5/issues/251)
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314) ⭐ NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)
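The PyTorch Hub tutorial above amounts to loading a pretrained model directly from `torch.hub` and calling it on your images. A minimal sketch, assuming internet access for the first download (the model name and image URL are illustrative examples):

```python
# Minimal PyTorch Hub inference sketch; see the PyTorch Hub tutorial linked above for full details.
import torch

# Fetches the repo and downloads yolov5s.pt from the latest release on first use
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Source can be a file path, URL, PIL image, OpenCV/numpy array, or a list of these
img = 'https://ultralytics.com/images/zidane.jpg'  # example image

results = model(img)   # inference, including NMS
results.print()        # print a results summary to stdout
results.save()         # save annotated images to runs/detect/exp
```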
## Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
## Inference
`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
$ python detect.py --source 0 # webcam
file.jpg # image
file.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/NUsoVlDFqZg' # YouTube video
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
To run inference on example images in `data/images`:
```bash
$ python detect.py
```