# ClearML Integration
<img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_dark.png#gh-light-mode-only" alt="Clear|ML"><img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_light.png#gh-dark-mode-only" alt="Clear|ML">
## About ClearML
[ClearML](https://cutt.ly/yolov5-tutorial-clearml) is an [open-source](https://github.com/allegroai/clearml) toolbox designed to save you time ⏱️.
🔨 Track every YOLOv5 training run in the <b>experiment manager</b>

🔧 Version and easily access your custom training data with the integrated ClearML <b>Data Versioning Tool</b>

🔦 <b>Remotely train and monitor</b> your YOLOv5 training runs using ClearML Agent

🔬 Get the very best mAP using ClearML <b>Hyperparameter Optimization</b>

🔭 Turn your newly trained <b>YOLOv5 model into an API</b> with just a few commands using ClearML Serving
<br />
And so much more. It's up to you how many of these tools you want to use: you can stick to the experiment manager, or chain them all together into an impressive pipeline!
<br />
<br />
![ClearML scalars dashboard](https://github.com/thepycoder/clearml_screenshots/raw/main/experiment_manager_with_compare.gif)
<br />
<br />
## 🦾 Setting Things Up
To keep track of your experiments and/or data, ClearML needs to communicate with a server. You have two options to get one:
Either sign up for free for the [ClearML Hosted Service](https://cutt.ly/yolov5-tutorial-clearml) or set up your own server; see [here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server). Even the server is open-source, so even if you're dealing with sensitive data, you should be good to go!
1. Install the `clearml` python package:
```bash
pip install clearml
```
1. Connect the ClearML SDK to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration) (at the top right, go to Settings -> Workspace -> Create new credentials), then execute the command below and follow the instructions:
```bash
clearml-init
```
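When `clearml-init` completes, it writes your credentials to a `clearml.conf` file in your home directory. As a rough illustration (the keys below are placeholders, and your server URLs may differ if you self-host), the relevant section looks something like this:

```
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        "access_key" = "YOUR_ACCESS_KEY"
        "secret_key" = "YOUR_SECRET_KEY"
    }
}
```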
That's it! You're done 😎
<br />
## 🚀 Training YOLOv5 With ClearML
To enable ClearML experiment tracking, simply install the ClearML pip package.
```bash
pip install "clearml>=1.2.0"
```
This will enable integration with the YOLOv5 training script. Every training run from now on will be captured and stored by the ClearML experiment manager.
If you want to change the `project_name` or `task_name`, use the `--project` and `--name` arguments of the `train.py` script; by default the project will be called `YOLOv5` and the task `Training`.
PLEASE NOTE: ClearML uses `/` as a delimiter for subprojects, so be careful when using `/` in your project name!
```bash
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
```
or with custom project and task name:
```bash
python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
```
This will capture:
- Source code + uncommitted changes
- Installed packages
- (Hyper)parameters
- Model files (use `--save-period n` to save a checkpoint every n epochs)
- Console output
- Scalars (mAP_0.5, mAP_0.5:0.95, precision, recall, losses, learning rates, ...)
- General info such as machine details, runtime, creation date etc.
- All produced plots such as label correlogram and confusion matrix
- Images with bounding boxes per epoch
- Mosaic per epoch
- Validation images per epoch
- ...
That's a lot, right? 🤯
Now we can visualize all of this information in the ClearML UI to get an overview of our training progress. Add custom columns to the table view (such as mAP_0.5) so you can easily sort on the best-performing model. Or select multiple experiments and compare them directly!
There's even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works!
<br />
## 🔗 Dataset Version Management
Versioning your data separately from your code is generally a good idea and makes it easy to acquire the latest version too. This repository supports supplying a dataset version ID, and it will make sure to get the data if it's not there yet. Next to that, this workflow also saves the used dataset ID as part of the task parameters, so you will always know for sure which data was used in which experiment!
![ClearML Dataset Interface](https://github.com/thepycoder/clearml_screenshots/raw/main/clearml_data.gif)
### Prepare Your Dataset
The YOLOv5 repository supports a number of different datasets by using yaml files containing their information. By default, datasets are downloaded to the `../datasets` folder relative to the repository root folder. So if you downloaded the `coco128` dataset using the link in the yaml or with the scripts provided by yolov5, you get this folder structure:
```
..
|_ yolov5
|_ datasets
|_ coco128
|_ images
|_ labels
|_ LICENSE
|_ README.txt
```
But this can be any dataset you wish. Feel free to use your own, as long as you keep to this folder structure.
Next, ⚠️**copy the corresponding yaml file to the root of the dataset folder**⚠️. This yaml file contains the information ClearML will need to properly use the dataset. You can make this yourself too, of course; just follow the structure of the example yamls.
Basically we need the following keys: `path`, `train`, `test`, `val`, `nc`, `names`.
```
..
|_ yolov5
|_ datasets
|_ coco128
|_ images
|_ labels
|_ coco128.yaml # <---- HERE!
|_ LICENSE
|_ README.txt
```
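For `coco128`, the copied yaml could look roughly like the sketch below (the class list is abridged here; the full file shipped with the repository is the authoritative version):

```yaml
path: ../datasets/coco128  # dataset root folder
train: images/train2017    # train images, relative to path
val: images/train2017      # val images, relative to path
test:                      # test images (optional)
nc: 80                     # number of classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane']  # abridged; list all nc class names
```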
### Upload Your Dataset
To get this dataset into ClearML as a versioned dataset, go to the dataset root folder and run the following command:
```bash
cd coco128
clearml-data sync --project YOLOv5 --name coco128 --folder .
```
The command `clearml-data sync` is actually a shorthand command. You could also run these commands one after the other:
```bash
# Optionally add --parent <parent_dataset_id> if you want to base
# this version on another dataset version, so no duplicate files are uploaded!
clearml-data create --name coco128 --project YOLOv5
clearml-data add --files .
clearml-data close
```
### Run Training Using A ClearML Dataset
Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 🚀 models!
```bash
python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
```
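Under the hood, the integration only needs to recognize the `clearml://` prefix and treat the remainder as a dataset version ID to download. A minimal sketch of that check (a hypothetical helper for illustration, not the repository's exact code):

```python
def parse_clearml_dataset(data_arg: str):
    """Return the ClearML dataset ID if data_arg uses the clearml:// scheme, else None."""
    prefix = "clearml://"
    if data_arg.startswith(prefix):
        return data_arg[len(prefix):]
    return None

# A recognized ClearML dataset argument yields its ID ...
print(parse_clearml_dataset("clearml://abc123def456"))  # -> abc123def456
# ... while a plain yaml path is left to the normal dataset loading logic.
print(parse_clearml_dataset("coco128.yaml"))  # -> None
```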
<br />
## 👀 Hyperparameter Optimization
Now that we have our experiments and data versioned, it's time to take a look at what we can build on top!
Using the code information, installed packages and environment details, the experiment itself is now **completely reproducible**. In fact, ClearML allows you to clone an experiment and even change its parameters. We can then just rerun it with these new parameters automatically; this is basically what HPO does!
To **run hyperparameter optimization locally**, we've included a pre-made script for you. Just make sure a training task has been run at least once, so it is in the ClearML experiment manager; we will essentially clone it and change its hyperparameters.
You'll need to fill in the ID of this `template task` in the script found at `utils/loggers/clearml/hpo.py` and then just run it :) You can change `task.execute_locally()` to `task.execute()` to put it in a ClearML queue and have a remote agent work on it instead.
```bash
# To use optuna, install it first, otherwise you can change the optimizer to just be RandomSearch
pip install optuna
python utils/loggers/clearml/hpo.py
```
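Conceptually, the optimizer just repeats the clone-change-rerun loop described above. The pure-Python toy below (with a made-up stand-in objective instead of a real training run) sketches the idea:

```python
import random

def objective(lr, momentum):
    """Stand-in for a full training run; pretend mAP peaks at lr=0.01, momentum=0.9."""
    return 1.0 - abs(lr - 0.01) * 10 - abs(momentum - 0.9)

random.seed(0)
best_score, best_params = float("-inf"), None
for _ in range(20):
    # "Clone" the template task with freshly sampled hyperparameters ...
    params = {"lr": random.uniform(0.001, 0.1), "momentum": random.uniform(0.6, 0.98)}
    # ... "rerun" it and keep the best-scoring configuration.
    score = objective(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_score, best_params)
```

A real optimizer like Optuna replaces the random sampling with a smarter search strategy, but the loop structure is the same.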
![HPO](https://github.com/thepycoder/clearml_screenshots/raw/main/hpo.png)
## 🤯 Remote Execution (advanced)
Running HPO locally is really handy, but what if we want to run our experiments on a remote machine instead? Maybe you have access to a very powerful GPU machine on-site, or you have some budget to use cloud GPUs.