<img src="https://cdn.comet.ml/img/notebook_logo.png">
# YOLOv5 with Comet
This guide will cover how to use YOLOv5 with [Comet](https://www.comet.com/site/?ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration).
# About Comet
Comet builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and deep learning models.
Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with [Comet Custom Panels](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration)!
Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!
# Getting Started
## Install Comet
```shell
pip install comet_ml
```
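If you want a quick sanity check that the package is available in your environment, you can print its version (the `comet_ml` package exposes a standard `__version__` attribute):
```shell
# Optional: confirm comet_ml is installed and importable
python -c "import comet_ml; print(comet_ml.__version__)"
```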
## Configure Comet Credentials
There are two ways to configure Comet with YOLOv5.
You can either set your credentials through environment variables.
**Environment Variables**
```shell
export COMET_API_KEY=<Your Comet API Key>
export COMET_PROJECT_NAME=<Your Comet Project Name> # This will default to 'yolov5'
```
Or create a `.comet.config` file in your working directory and set your credentials there.
**Comet Configuration File**
```
[comet]
api_key=<Your Comet API Key>
project_name=<Your Comet Project Name> # This will default to 'yolov5'
```
## Run the Training Script
```shell
# Train YOLOv5s on COCO128 for 5 epochs
python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt
```
That's it! Comet will automatically log your hyperparameters, command line arguments, and training and validation metrics. You can visualize and analyze your runs in the Comet UI.
<img width="1920" alt="yolo-ui" src="https://user-images.githubusercontent.com/7529846/187608607-ff89c3d5-1b8b-4743-a974-9275301b0524.png">
# Try out an Example!
Check out an example of a [completed run here](https://www.comet.com/examples/comet-example-yolov5/a0e29e0e9b984e4a822db2a62d0cb357?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration).
Or better yet, try it out yourself in this Colab Notebook:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing)
# Log automatically
By default, Comet will log the following items
## Metrics
- Box Loss, Object Loss, and Classification Loss for the training and validation data
- mAP_0.5 and mAP_0.5:0.95 for the validation data
- Precision and Recall for the validation data
## Parameters
- Model Hyperparameters
- All parameters passed through the command line options
## Visualizations
- Confusion Matrix of the model predictions on the validation data
- Plots for the PR and F1 curves across all classes
- Correlogram of the Class Labels
# Configure Comet Logging
Comet can be configured to log additional data either through command line flags passed to the training script
or through environment variables.
```shell
export COMET_MODE=online # Set whether to run Comet in 'online' or 'offline' mode. Defaults to online
export COMET_MODEL_NAME=<your model name> # Set the name for the saved model. Defaults to yolov5
export COMET_LOG_CONFUSION_MATRIX=false # Set to disable logging a Comet Confusion Matrix. Defaults to true
export COMET_MAX_IMAGE_UPLOADS=<number of allowed images to upload to Comet> # Controls how many total image predictions to log to Comet. Defaults to 100.
export COMET_LOG_PER_CLASS_METRICS=true # Set to log evaluation metrics for each detected class at the end of training. Defaults to false
export COMET_DEFAULT_CHECKPOINT_FILENAME=<your checkpoint filename> # Set this if you would like to resume training from a different checkpoint. Defaults to 'last.pt'
export COMET_LOG_BATCH_LEVEL_METRICS=true # Set this if you would like to log training metrics at the batch level. Defaults to false.
export COMET_LOG_PREDICTIONS=true # Set this to false to disable logging model predictions
```
## Logging Checkpoints with Comet
Logging Models to Comet is disabled by default. To enable it, pass the `--save-period` argument to the training script. Checkpoints will then be logged to Comet at the interval specified by `--save-period`.
```shell
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--save-period 1
```
## Logging Model Predictions
By default, model predictions (images, ground truth labels, and bounding boxes) will be logged to Comet.
You can control the frequency of logged predictions and the associated images by passing the `--bbox_interval` command line argument. This frequency corresponds to every Nth batch of data per epoch; in the example below, every 2nd batch of each epoch is logged. Predictions can be visualized using Comet's Object Detection Custom Panel.
**Note:** The YOLOv5 validation dataloader will default to a batch size of 32, so you will have to set the logging frequency accordingly.
Here is an [example project using the Panel](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration)
```shell
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--bbox_interval 2
```
### Controlling the number of Prediction Images logged to Comet
When logging predictions from YOLOv5, Comet will log the images associated with each set of predictions. By default, a maximum of 100 validation images is logged. You can increase or decrease this number using the `COMET_MAX_IMAGE_UPLOADS` environment variable.
```shell
env COMET_MAX_IMAGE_UPLOADS=200 python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--bbox_interval 1
```
### Logging Class Level Metrics
Use the `COMET_LOG_PER_CLASS_METRICS` environment variable to log mAP, precision, recall, and F1 for each class.
```shell
env COMET_LOG_PER_CLASS_METRICS=true python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt
```
## Uploading a Dataset to Comet Artifacts
If you would like to store your data using [Comet Artifacts](https://www.comet.com/docs/v2/guides/data-management/using-artifacts/#learn-more?ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration), you can do so using the `upload_dataset` flag.
The dataset must be organized in the way described in the [YOLOv5 documentation](https://docs.ultralytics.com/tutorials/train-custom-datasets/#3-organize-directories). The dataset config `yaml` file must follow the same format as the `coco128.yaml` file.
```shell
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--upload_dataset
```
You can find the uploaded dataset in the Artifacts tab in your Comet Workspace.
<img width="1073" alt="artifact-1" src="https://user-images.githubusercontent.com/7529846/186929193-162718bf-ec7b-4eb9-8c3b-86b3763ef8ea.png">
You can preview the data directly in the Comet UI.
<img width="1082" alt="artifact-2" src="https://user-images.githubusercontent.com/7529846/186929215-432c36a9-c109-4eb0-944b-84c2786590d6.png">
Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset `yaml` file.
<img width="963" alt="artifact-3" src="https://user-images.githubusercontent.com/7529846/186929256-9d44d6eb-1a19-42de-889a-bcbca3018f2e.png">
### Using a saved Artifact
If you would like to use a dataset from Comet Artifacts, set the `path` variable in your dataset `yaml` file to point to the Artifact resource URL.
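As a minimal sketch, assuming the `comet://` Artifact URI scheme described in the Comet Artifacts documentation (the `artifact.yaml` filename and the workspace, artifact name, and version placeholders below are illustrative), the dataset `yaml` points at the Artifact instead of a local directory and is then passed to the training script via `--data`:
```yaml
# contents of artifact.yaml -- path references a Comet Artifact rather than a local folder
path: "comet://<workspace name>/<artifact name>:<artifact version or alias>"
```
```shell
# Train using the dataset stored in Comet Artifacts
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data artifact.yaml \
--weights yolov5s.pt
```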