📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.
- [About Weights & Biases](#about-weights--biases)
- [First-Time Setup](#first-time-setup)
- [Viewing runs](#viewing-runs)
- [Disabling wandb](#disabling-wandb)
- [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage)
- [Reports: Share your work with the world!](#reports)
## About Weights & Biases
Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models: architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.
Used by top researchers including teams at OpenAI, Lyft, GitHub, and MILA, W&B is part of the new standard of best practices for machine learning. Here is how W&B can help you optimize your machine learning workflows:
- [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time
- [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically
- [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization
- [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators
- [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently
- [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models
## First-Time Setup
<details open>
<summary> Toggle Details </summary>
When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device.
W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run will be given a unique run **name** within that project, addressed as project/name. You can also manually set your project and run name:
```shell
$ python train.py --project ... --name ...
```
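For non-interactive environments (CI jobs, remote servers) where the login prompt is inconvenient, W&B also reads the API key from the `WANDB_API_KEY` environment variable. A minimal sketch; the key value below is a placeholder for your own key from https://wandb.ai/authorize:

```python
import os

# Supply the W&B API key via the environment so no interactive prompt appears.
# "your-api-key-here" is a placeholder; use your key from https://wandb.ai/authorize.
os.environ["WANDB_API_KEY"] = "your-api-key-here"

# Any wandb-enabled script started from this process (e.g. train.py)
# inherits the variable and logs in automatically.
```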
YOLOv5 notebook example: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<img width="960" alt="Screen Shot 2021-09-29 at 10 23 13 PM" src="https://user-images.githubusercontent.com/26833433/135392431-1ab7920a-c49d-450a-b0b0-0c86ec86100e.png">
</details>
## Viewing Runs
<details open>
<summary> Toggle Details </summary>
Run information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in <b>real time</b>. All important information is logged:
- Training & Validation losses
- Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95
- Learning Rate over time
- A bounding box debugging panel, showing the training progress over time
- GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage**
- System: Disk I/O, CPU utilization, RAM usage
- Your trained model as a W&B Artifact
- Environment: OS and Python types, Git repository and state, **training command**
<p align="center"><img width="900" alt="Weights & Biases dashboard" src="https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png"></p>
</details>
## Disabling wandb
- To disable W&B logging, run `wandb disabled` inside the project directory; subsequent training will create no wandb run
![Screenshot (84)](https://user-images.githubusercontent.com/15766192/143441777-c780bdd7-7cb4-4404-9559-b4316030a985.png)
- To enable wandb again, run `wandb online`
![Screenshot (85)](https://user-images.githubusercontent.com/15766192/143441866-7191b2cb-22f0-4e0f-ae64-2dc47dc13078.png)
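The same effect can be achieved per process through the `WANDB_MODE` environment variable, which W&B checks when a run is created. A sketch:

```python
import os

# Disable W&B for this process only, without changing the directory-wide
# `wandb disabled` setting; valid values include "online", "offline", "disabled".
os.environ["WANDB_MODE"] = "disabled"

# To re-enable cloud logging, set it back (or unset the variable):
# os.environ["WANDB_MODE"] = "online"
```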
## Advanced Usage
You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
<details open>
<h3> 1: Train and Log Evaluation simultaneously </h3>
This is an extension of the previous section, but it will also start training after uploading the dataset. <b>This also logs an evaluation Table.</b>
The evaluation Table compares your predictions and ground truths across the validation set for each epoch. It uses references to the already uploaded dataset,
so no images will be uploaded from your system more than once.
<details open>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --upload_data val</code>
![Screenshot from 2021-11-21 17-40-06](https://user-images.githubusercontent.com/15766192/142761183-c1696d8c-3f38-45ab-991a-bb0dfd98ae7d.png)
</details>
<h3>2. Visualize and Version Datasets</h3>
Log, visualize, dynamically query, and understand your data with <a href='https://docs.wandb.ai/guides/data-vis/tables'>W&B Tables</a>. You can use the following command to log your dataset as a W&B Table. This will generate a <code>{dataset}_wandb.yaml</code> file which can be used to train from the dataset artifact.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python utils/logger/wandb/log_dataset.py --project ... --name ... --data ... </code>
![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png)
</details>
<h3> 3: Train using dataset artifact </h3>
When you upload a dataset as described in the first section, you get a new config file with <code>_wandb</code> appended to its name. This file contains the information
needed to train a model directly from the dataset artifact. <b>This also logs evaluation.</b>
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --data {data}_wandb.yaml </code>
![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)
</details>
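To illustrate, the generated config might look like the sketch below: the usual image paths are replaced by references to the uploaded dataset artifacts, while the rest of the data yaml is carried over. The exact keys, artifact names, and classes depend on your own dataset yaml; everything here is a placeholder:

```yaml
# Hypothetical sketch of a generated {data}_wandb.yaml; all values are placeholders
train: wandb-artifact://username/YOLOv5/train_dataset:latest
val: wandb-artifact://username/YOLOv5/val_dataset:latest
nc: 3
names: [cardboard, glass, metal]  # example class names
```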
<h3> 4: Save model checkpoints as artifacts </h3>
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` is the checkpoint interval in epochs.
You can also log both the dataset and model checkpoints simultaneously. If `--save_period` is not passed, only the final model will be logged.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --save_period 1 </code>
![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png)
</details>
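As a rough sketch of the interval logic (a simplification for illustration, not the actual train.py code), checkpoints land on every epoch divisible by the period, and the final model is logged in any case:

```python
def checkpoint_epochs(epochs, save_period):
    """Epochs (0-indexed) at which a checkpoint artifact would be saved,
    assuming one checkpoint every `save_period` epochs; a simplification
    of the real training-loop logic."""
    if save_period < 1:           # mirrors "if not passed, only the final model"
        return [epochs - 1]
    saved = [e for e in range(epochs) if e % save_period == 0]
    if epochs - 1 not in saved:   # the final model is always logged
        saved.append(epochs - 1)
    return saved

print(checkpoint_epochs(5, 2))  # -> [0, 2, 4]
```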
</details>
<h3> 5: Resume runs from checkpoint artifacts </h3>
Any run can be resumed using artifacts if the <code>--resume</code> argument starts with the <code>wandb-artifact://</code> prefix followed by the run path, i.e., <code>wandb-artifact://username/project/runid</code>. This doesn't require the model checkpoint to be present on the local system.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>
![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)
</details>
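The run path after the <code>wandb-artifact://</code> prefix is just the username, project, and run id joined by slashes; a small helper (a convenience sketch, not part of YOLOv5) makes that explicit:

```python
def artifact_resume_path(username, project, run_id):
    # Compose the value passed to --resume; the prefix is fixed, the rest
    # comes from the W&B run you want to resume.
    return f"wandb-artifact://{username}/{project}/{run_id}"

print(artifact_resume_path("alice", "YOLOv5", "1a2b3c4d"))
# -> wandb-artifact://alice/YOLOv5/1a2b3c4d
```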
<h3> 6: Resume runs from dataset artifact & checkpoint artifacts </h3>
<b> Local dataset or model checkpoints are not required. This can be used to resume runs directly on a different device </b>
The syntax is the same as in the previous section, but you'll need to log both the dataset and model checkpoints as artifacts, i.e., set both <code>--upload_dataset</code> (or train from a <code>_wandb.yaml</code> file) and <code>--save_period</code>.