📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.
* [About Weights & Biases](#about-weights--biases)
* [First-Time Setup](#first-time-setup)
* [Viewing runs](#viewing-runs)
* [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage)
* [Reports: Share your work with the world!](#reports)
## About Weights & Biases
Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models: architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.
Used by top researchers, including teams at OpenAI, Lyft, GitHub, and MILA, W&B is part of the new standard of best practices for machine learning. Here is how W&B can help you optimize your machine learning workflows:
* [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time
* [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically
* [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization
* [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators
* [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently
* [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models
## First-Time Setup
<details open>
<summary> Toggle Details </summary>
When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device.
W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run is given a unique run **name** within that project, identified as project/name. You can also set your project and run name manually:
```shell
$ python train.py --project ... --name ...
```
YOLOv5 notebook example: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<img width="960" alt="Screen Shot 2021-09-29 at 10 23 13 PM" src="https://user-images.githubusercontent.com/26833433/135392431-1ab7920a-c49d-450a-b0b0-0c86ec86100e.png">
</details>
## Viewing Runs
<details open>
<summary> Toggle Details </summary>
Run information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in <b>real time</b>. All important information is logged:
* Training & Validation losses
* Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95
* Learning Rate over time
* A bounding box debugging panel, showing the training progress over time
* GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage**
* System: Disk I/O, CPU utilization, RAM usage
* Your trained model as a W&B Artifact
* Environment: OS and Python types, Git repository and state, **training command**
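Among the metrics above, mAP@0.5 counts a prediction as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5, while mAP@0.5:0.95 averages over IoU thresholds from 0.5 to 0.95. A minimal sketch of the IoU test in plain Python (illustrative only, not the repo's implementation):

```python
def iou(box_a, box_b):
    # boxes given as (x1, y1, x2, y2) corner coordinates
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# mAP@0.5 treats a prediction as correct when IoU >= 0.5
pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(pred, truth))  # overlap 50 / union 150 -> 0.333..., below the 0.5 threshold
```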
<p align="center"><img width="900" alt="Weights & Biases dashboard" src="https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png"></p>
</details>
## Advanced Usage
You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
<details open>
<h3>1. Visualize and Version Datasets</h3>
Log, visualize, dynamically query, and understand your data with <a href='https://docs.wandb.ai/guides/data-vis/tables'>W&B Tables</a>. You can use the following command to log your dataset as a W&B Table. This will generate a <code>{dataset}_wandb.yaml</code> file which can be used to train from the dataset artifact.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python utils/logger/wandb/log_dataset.py --project ... --name ... --data .. </code>
![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png)
</details>
<h3> 2: Train and Log Evaluation simultaneously </h3>
This is an extension of the previous section, but it will also start training after uploading the dataset. <b>This also logs an evaluation Table.</b>
The evaluation Table compares your predictions and ground truths across the validation set for each epoch. It uses references to the already uploaded dataset,
so no images are uploaded from your system more than once.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --data .. --upload_dataset </code>
![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)
</details>
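The reference mechanism described above can be sketched as a small cache: the first epoch stores a reference for each image, and later epochs reuse it instead of uploading again. A toy illustration in plain Python (the `artifact://` reference string here is hypothetical, not W&B's actual format):

```python
uploaded = {}  # filename -> reference; stands in for the dataset artifact

def reference_for(filename):
    # upload once, then reuse the stored reference on later epochs
    if filename not in uploaded:
        uploaded[filename] = f"artifact://dataset/{filename}"
    return uploaded[filename]

refs_epoch1 = [reference_for(f) for f in ["a.jpg", "b.jpg"]]
refs_epoch2 = [reference_for(f) for f in ["a.jpg", "b.jpg"]]
print(len(uploaded))  # 2: the second epoch reused references, nothing was re-uploaded
```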
<h3> 3: Train using dataset artifact </h3>
When you upload a dataset as described in the first section, you get a new config file with <code>_wandb</code> appended to its name. This file contains the information
needed to train a model directly from the dataset artifact. <b>This also logs evaluation.</b>
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --data {data}_wandb.yaml </code>
![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)
</details>
<h3> 4: Save model checkpoints as artifacts </h3>
To enable saving and versioning checkpoints of your experiment, pass <code>--save_period n</code> with the base command, where <code>n</code> is the checkpoint interval in epochs.
You can also log both the dataset and model checkpoints simultaneously. If <code>--save_period</code> is not passed, only the final model is logged.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --save_period 1 </code>
![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png)
</details>
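The interval semantics can be illustrated with a one-liner. This is a plausible reading, assuming epochs are counted from zero; the exact set of saved epochs depends on the training loop:

```python
def saved_epochs(total_epochs, save_period):
    # with --save_period n, a checkpoint is logged every n epochs
    return [e for e in range(total_epochs) if e % save_period == 0]

print(saved_epochs(10, 3))  # [0, 3, 6, 9]
```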
<h3> 5: Resume runs from checkpoint artifacts </h3>
Any run can be resumed using artifacts if the <code>--resume</code> argument starts with the <code>wandb-artifact://</code> prefix followed by the run path, i.e., <code>wandb-artifact://username/project/runid</code>. This doesn't require the model checkpoint to be present on the local system.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>
![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)
</details>
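The run path after the prefix has three slash-separated components: username, project, and run id. A small illustrative parser (a hypothetical helper, not part of the repo):

```python
def parse_artifact_path(resume_arg):
    # expects the form wandb-artifact://username/project/runid
    prefix = "wandb-artifact://"
    if not resume_arg.startswith(prefix):
        raise ValueError("not a wandb-artifact:// path")
    user, project, run_id = resume_arg[len(prefix):].split("/")
    return user, project, run_id

# 'alice' and '1abc234' are made-up example values
print(parse_artifact_path("wandb-artifact://alice/YOLOv5/1abc234"))
# ('alice', 'YOLOv5', '1abc234')
```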
<h3> 6: Resume runs from dataset artifact & checkpoint artifacts </h3>
<b>Local dataset or model checkpoints are not required. This can be used to resume runs directly on a different device.</b>
The syntax is the same as in the previous section, but you'll need to log both the dataset and model checkpoints as artifacts, i.e., either set <code>--upload_dataset</code> or
train from a <code>_wandb.yaml</code> file, and set <code>--save_period</code>.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>
![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)
</details>
</details>
## Reports
W&B Reports can be created from your saved runs and shared online, letting collaborators explore your results from a single link.