<div align="center">
<p>
<a align="left" href="https://ultralytics.com/yolov5" target="_blank">
<img width="850" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/splash.jpg"></a>
</p>
<br>
<div>
<a href="https://github.com/ultralytics/yolov5/actions"><img src="https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg" alt="CI CPU testing"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
<br>
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<a href="https://join.slack.com/t/ultralytics/shared_invite/zt-w29ei8bp-jczz7QYUmDtgo6r6KcMIAg"><img src="https://img.shields.io/badge/Slack-Join_Forum-blue.svg?logo=slack" alt="Join Forum"></a>
</div>
<br>
<p>
YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents <a href="https://ultralytics.com">Ultralytics</a>
open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
</p>
<div align="center">
<a href="https://github.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.linkedin.com/company/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://twitter.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.producthunt.com/@glenn_jocher">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-producthunt.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://youtube.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.facebook.com/ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="2%"/>
</a>
<img width="2%" />
<a href="https://www.instagram.com/ultralytics/">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="2%"/>
</a>
</div>
<!--
<a align="center" href="https://ultralytics.com/yolov5" target="_blank">
<img width="800" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-api.png"></a>
-->
</div>
## <div align="center">Documentation</div>
See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.
## <div align="center">Quick Start Examples</div>
<details open>
<summary>Install</summary>
Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a
[**Python>=3.7.0**](https://www.python.org/) environment, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
```bash
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
```
</details>
<details open>
<summary>Inference</summary>
Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest
YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x, custom
# Images
img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
```
</details>
<details>
<summary>Inference with detect.py</summary>
`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from
the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
python detect.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
</details>
<details>
<summary>Training</summary>
The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh)
results. [Models](https://github.com/ultralytics/yolov5/tree/master/models)
and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest
YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are
1/2/4/6/8 days on a single V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) training is proportionally faster). Use the
largest `--batch-size` possible, or pass `--batch-size -1` for
YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.
```bash
python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
yolov5s 64
yolov5m 40
yolov5l 24
yolov5x 16
```
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
</details>
<details open>
<summary>Tutorials</summary>
* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results) ☘️ RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289) 🌟 NEW
* [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975) 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) ⭐ NEW
* [TFLite, ONNX, CoreML, TensorRT Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314) ⭐ NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)
</details>
## <div align="center">Environments</div>
Get started in seconds with our verified environments: free GPU notebooks on [Google Colab](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) and [Kaggle](https://www.kaggle.com/ultralytics/yolov5), Google Cloud Deep Learning VM, Amazon Deep Learning AMI, and the [Docker image](https://hub.docker.com/r/ultralytics/yolov5) on Docker Hub.
## <div align="center">yolov5-6.1-ECA</div>
YOLOv5 is an efficient and widely used real-time object detection framework developed and maintained by the Ultralytics team. This package adds the ECA (Efficient Channel Attention) mechanism to version 6.1, a technique intended to improve model performance, especially under tight compute budgets. By strengthening the representational quality of feature maps, ECA helps the model focus on the most relevant information and thereby improves detection accuracy.

The core idea of the ECA module is to keep channel attention effective while minimizing its computational cost. Instead of the fully-connected dimensionality-reduction bottleneck used in SE blocks, it applies a lightweight 1D convolution across channels after global average pooling, capturing local cross-channel interaction with negligible impact on speed while improving how the model uses its features.
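To make this concrete, below is a minimal sketch of an ECA block in PyTorch, following the published ECA-Net design (global average pooling, a 1D convolution whose kernel size adapts to the channel count, then a sigmoid gate). How the block is wired into this package's YOLOv5 modules is not specified here, so treat the class name and placement as assumptions.

```python
import math

import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient Channel Attention block (a sketch of the ECA-Net design)."""

    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Kernel size adapts to channel count: k = |log2(C)/gamma + b/gamma|, rounded up to odd
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # (B, C, H, W) -> (B, C, 1, 1): per-channel global context
        y = self.pool(x)
        # Treat channels as a 1D sequence: (B, C, 1, 1) -> (B, 1, C), convolve, restore shape
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Re-weight the input feature map channel-wise
        return x * self.sigmoid(y)
```

In YOLOv5-style code such a block would typically be registered in `models/common.py` and referenced from the model YAML (for example after C3 or Bottleneck modules), but the exact integration points used in this zip are assumptions to verify against its source.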
The YOLO (You Only Look Once) family is known for fast yet accurate object detection. Compared with earlier versions, YOLOv5 refines the architecture, loss functions and data preprocessing, further improving detection speed and accuracy. Adding ECA to YOLOv5-6.1 can be seen as a targeted reinforcement of model performance, particularly in complex scenes and for small objects.
The training procedure is identical to the official release, so users can still train on custom datasets with the `train.py` script, which handles data loading, model initialization, the training loop and checkpointing. `export.py` converts a trained model into deployable formats, `val.py` measures performance on a validation set, and `detect.py` applies the model in practice, running detection on input images or video.
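A sketch of that end-to-end workflow follows; `custom.yaml` and `yolov5s-eca.yaml` are hypothetical placeholders for your own dataset definition and the ECA model config shipped in this package.

```bash
python train.py --data custom.yaml --cfg yolov5s-eca.yaml --weights '' --img 640 --batch-size 16 --epochs 100  # train
python val.py --data custom.yaml --weights runs/train/exp/weights/best.pt  # validate
python export.py --weights runs/train/exp/weights/best.pt --include onnx  # export
python detect.py --weights runs/train/exp/weights/best.pt --source path/to/images  # detect
```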
The `Dockerfile` describes how to build the runtime environment, letting developers reproduce results in an isolated container and keeping experiments repeatable. `setup.cfg` holds project configuration and `CONTRIBUTING.md` the contribution guidelines; for an open-source project, both are essential for new contributors to understand and join the work. The `LICENSE` file defines the license terms (YOLOv5 itself is distributed under GPL-3.0, which allows broad use and modification of the code), and `README.md` introduces the project's goals and usage, the natural first stop for anyone getting to know it.
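As a usage sketch, the prebuilt image on Docker Hub (linked from the badge at the top of this page) can be pulled instead of building from the `Dockerfile`; the tag and GPU flags below follow the standard Docker quickstart pattern and may need adjusting for your setup.

```bash
docker pull ultralytics/yolov5:latest  # pull the prebuilt image
docker run --ipc=host --gpus all -it ultralytics/yolov5:latest  # interactive container with GPU access
```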
By combining YOLOv5-6.1 with the ECA channel attention mechanism, this release strengthens the model's perception of features and improves both the efficiency and accuracy of detection. For researchers and practitioners in deep learning, and in object detection in particular, it offers a more capable tool and more room to explore.