📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.
- [About Weights & Biases](#about-weights--biases)
- [First-Time Setup](#first-time-setup)
- [Viewing Runs](#viewing-runs)
- [Disabling wandb](#disabling-wandb)
- [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage)
- [Reports: Share your work with the world!](#reports)
## About Weights & Biases
Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models: architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.
Used by top researchers including teams at OpenAI, Lyft, GitHub, and MILA, W&B is part of the new standard of best practices for machine learning. Here's how W&B can help you optimize your machine learning workflows:
- [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time
- [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically
- [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization
- [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators
- [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently
- [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models
## First-Time Setup
<details open>
<summary> Toggle Details </summary>
When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device.
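If you prefer not to paste the key interactively (for example on a CI machine), W&B also reads the key from the environment. A minimal sketch, assuming the standard `WANDB_API_KEY` environment variable (the value below is a placeholder, not a real key):

```python
import os

# Non-interactive alternative to the login prompt (assumption: W&B's
# standard WANDB_API_KEY environment variable; placeholder value only).
os.environ["WANDB_API_KEY"] = "0123456789abcdef"
```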
W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run is given a unique run **name** within that project, displayed as project/name. You can also set your project and run name manually:
```shell
$ python train.py --project ... --name ...
```
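For example, the flags above can be filled in with concrete values. A small illustrative helper (hypothetical, not part of YOLOv5) that assembles such an invocation, e.g. for launching via `subprocess`:

```python
# Hypothetical helper: build the train.py command line with a custom
# W&B project and run name, plus any extra training flags.
def train_command(project, name, extra_args=()):
    cmd = ["python", "train.py", "--project", project, "--name", name]
    cmd.extend(list(extra_args))
    return cmd

print(train_command("YOLOv5-experiments", "baseline-640"))
```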
YOLOv5 notebook example: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<img width="960" alt="Screen Shot 2021-09-29 at 10 23 13 PM" src="https://user-images.githubusercontent.com/26833433/135392431-1ab7920a-c49d-450a-b0b0-0c86ec86100e.png">
</details>
## Viewing Runs
<details open>
<summary> Toggle Details </summary>
Run information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in <b>real time</b>. All important information is logged:
- Training & Validation losses
- Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95
- Learning Rate over time
- A bounding box debugging panel, showing the training progress over time
- GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage**
- System: Disk I/O, CPU utilization, RAM usage
- Your trained model as a W&B Artifact
- Environment: OS and Python types, Git repository and state, **training command**
<p align="center"><img width="900" alt="Weights & Biases dashboard" src="https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png"></p>
</details>
## Disabling wandb
- After running `wandb disabled` inside a directory, subsequent training in that directory creates no wandb run
![Screenshot (84)](https://user-images.githubusercontent.com/15766192/143441777-c780bdd7-7cb4-4404-9559-b4316030a985.png)
- To enable wandb again, run `wandb online`
![Screenshot (85)](https://user-images.githubusercontent.com/15766192/143441866-7191b2cb-22f0-4e0f-ae64-2dc47dc13078.png)
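The same toggle can be expressed through W&B's documented `WANDB_MODE` environment variable, which the `wandb disabled` / `wandb online` commands effectively control. A small sketch of that check, assuming the documented semantics:

```python
import os

def wandb_enabled(env=None):
    # Sketch, assuming W&B's documented WANDB_MODE environment variable:
    # "disabled" switches logging off; anything else ("online", "offline",
    # or unset) leaves the wandb run machinery active.
    env = os.environ if env is None else env
    return env.get("WANDB_MODE", "online") != "disabled"
```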
## Advanced Usage
You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
<details open>
<h3> 1: Train and Log Evaluation simultaneously </h3>
This is an extension of the previous section: the dataset is uploaded first, and training then starts from it. <b>This also logs the evaluation table.</b>
The evaluation table compares your predictions and ground truths across the validation set for each epoch. It uses references to the already-uploaded dataset,
so no image is uploaded from your system more than once.
<details open>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --upload_data val</code>
![Screenshot from 2021-11-21 17-40-06](https://user-images.githubusercontent.com/15766192/142761183-c1696d8c-3f38-45ab-991a-bb0dfd98ae7d.png)
</details>
<h3> 2: Visualize and Version Datasets </h3>
Log, visualize, dynamically query, and understand your data with <a href='https://docs.wandb.ai/guides/data-vis/tables'>W&B Tables</a>. You can use the following command to log your dataset as a W&B Table. This will generate a <code>{dataset}_wandb.yaml</code> file which can be used to train from the dataset artifact.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python utils/logger/wandb/log_dataset.py --project ... --name ... --data ... </code>
![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png)
</details>
<h3> 3: Train using dataset artifact </h3>
When you upload a dataset as described in the first section, you get a new config file with <code>_wandb</code> appended to its name. This file contains the information
needed to train a model directly from the dataset artifact. <b>This also logs evaluation.</b>
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --data {data}_wandb.yaml </code>
![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)
</details>
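The naming convention above (append `_wandb` to the config's stem) can be expressed as a tiny helper. This is an illustrative sketch, not YOLOv5 source code:

```python
from pathlib import Path

def wandb_config_name(data_yaml):
    # Derive the artifact-aware config name by appending "_wandb" to the
    # file stem, e.g. coco128.yaml -> coco128_wandb.yaml.
    p = Path(data_yaml)
    return str(p.with_name(p.stem + "_wandb" + p.suffix))
```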
<h3> 4: Save model checkpoints as artifacts </h3>
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` is the checkpoint interval in epochs.
You can also log both the dataset and model checkpoints simultaneously. If `--save_period` is not passed, only the final model will be logged.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --save_period 1 </code>
![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png)
</details>
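The interval logic can be sketched as follows. This is an assumption about the behaviour (a checkpoint every `n` epochs, only the final model otherwise), not the exact YOLOv5 implementation:

```python
def checkpoint_epochs(total_epochs, save_period):
    # Sketch: a checkpoint is saved every `save_period` epochs (1-indexed);
    # a non-positive value means only the final model is logged.
    if save_period < 1:
        return []
    return [e for e in range(1, total_epochs + 1) if e % save_period == 0]
```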
</details>
<h3> 5: Resume runs from checkpoint artifacts </h3>
Any run can be resumed using artifacts if the <code>--resume</code> argument starts with the <code>wandb-artifact://</code> prefix followed by the run path, i.e., <code>wandb-artifact://username/project/run_id</code>. This doesn't require the model checkpoint to be present on the local system.
<details>
<summary> <b>Usage</b> </summary>
<b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>
![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)
</details>
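The run-path syntax above can be parsed with a small helper. This is a hypothetical sketch of the parsing, not the actual YOLOv5 resume code:

```python
WANDB_ARTIFACT_PREFIX = "wandb-artifact://"

def parse_run_path(resume_arg):
    # Parse "wandb-artifact://username/project/run_id" into its three
    # components; return None for ordinary local checkpoint paths.
    if not resume_arg.startswith(WANDB_ARTIFACT_PREFIX):
        return None
    parts = resume_arg[len(WANDB_ARTIFACT_PREFIX):].strip("/").split("/")
    return tuple(parts) if len(parts) == 3 else None
```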
<h3> 6: Resume runs from dataset artifact & checkpoint artifacts </h3>
<b>Local dataset or model checkpoints are not required. This can be used to resume runs directly on a different device.</b>
The syntax is the same as in the previous section, but you'll need to log both the dataset and model checkpoints as artifacts, i.e., set both <code>--upload_dataset</code> (or train from a <code>_wandb.yaml</code> file) and <code>--save_period</code>.