# Multi-Object Tracking with Ultralytics YOLO
<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418637-1d6250fd-1515-4c10-a844-a32818ae6d46.png" alt="YOLOv8 trackers visualization">
Object tracking in the realm of video analytics is a critical task that not only identifies the location and class of objects within the frame but also maintains a unique ID for each detected object as the video progresses. The applications are limitless—ranging from surveillance and security to real-time sports analytics.
## Why Choose Ultralytics YOLO for Object Tracking?
The output from Ultralytics trackers is consistent with standard object detection but has the added value of object IDs. This makes it easy to track objects in video streams and perform subsequent analytics. Here's why you should consider using Ultralytics YOLO for your object tracking needs:
- **Efficiency:** Process video streams in real-time without compromising accuracy.
- **Flexibility:** Supports multiple tracking algorithms and configurations.
- **Ease of Use:** Simple Python API and CLI options for quick integration and deployment.
- **Customizability:** Easy to use with custom trained YOLO models, allowing integration into domain-specific applications.
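To make that ID-augmented output concrete, here is a minimal sketch, assuming the official `yolov8n.pt` weights and the same sample video used in the examples below (any video or stream source works): each tracked box exposes an `id` alongside the usual detection fields.

```python
from ultralytics import YOLO

# Minimal sketch: tracked boxes carry an `id` in addition to class, confidence and coordinates
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", stream=True)

for result in results:
    boxes = result.boxes
    if boxes.id is not None:  # IDs are None until the tracker confirms a track
        for track_id, cls, xyxy in zip(boxes.id.int().tolist(), boxes.cls.tolist(), boxes.xyxy.tolist()):
            print(f"track {track_id}: class {int(cls)} at {xyxy}")
```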
**Video Tutorial:** [Object Detection and Tracking with Ultralytics YOLOv8](https://www.youtube.com/embed/hHyHmOtmEgs?si=VNZtXmm45Nb9s-N-).
## Features at a Glance
Ultralytics YOLO extends its object detection features to provide robust and versatile object tracking:
- **Real-Time Tracking:** Seamlessly track objects in high-frame-rate videos.
- **Multiple Tracker Support:** Choose from a variety of established tracking algorithms.
- **Customizable Tracker Configurations:** Tailor the tracking algorithm to meet specific requirements by adjusting various parameters.
## Available Trackers
Ultralytics YOLO supports the following tracking algorithms. They can be enabled by passing the relevant YAML configuration file such as `tracker=tracker_type.yaml`:
- [BoT-SORT](https://github.com/NirAharon/BoT-SORT) - Use `botsort.yaml` to enable this tracker.
- [ByteTrack](https://github.com/ifzhang/ByteTrack) - Use `bytetrack.yaml` to enable this tracker.
The default tracker is BoT-SORT.
## Tracking
To run the tracker on video streams, use a trained Detect, Segment, or Pose model such as YOLOv8n, YOLOv8n-seg, or YOLOv8n-pose.
#### Python
```python
from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model
model = YOLO("yolov8n-seg.pt")  # Load an official Segment model
model = YOLO("yolov8n-pose.pt")  # Load an official Pose model
model = YOLO("path/to/best.pt")  # Load a custom trained model

# Perform tracking with the model
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True)  # Tracking with default tracker
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True, tracker="bytetrack.yaml")  # Tracking with ByteTrack tracker
```
#### CLI
```bash
# Perform tracking with various models using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" # Official Detect model
yolo track model=yolov8n-seg.pt source="https://youtu.be/LNwODJXcvt4" # Official Segment model
yolo track model=yolov8n-pose.pt source="https://youtu.be/LNwODJXcvt4" # Official Pose model
yolo track model=path/to/best.pt source="https://youtu.be/LNwODJXcvt4" # Custom trained model
# Track using ByteTrack tracker
yolo track model=path/to/best.pt tracker="bytetrack.yaml"
```
As can be seen in the above usage, tracking is available for all Detect, Segment and Pose models run on videos or streaming sources.
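Streaming sources follow the same pattern; a minimal sketch, assuming a local webcam is available at device index `0`, simply points `source` at the camera:

```python
from ultralytics import YOLO

# Minimal sketch: track objects from a live webcam feed instead of a video file
# (assumes a camera at device index 0)
model = YOLO("yolov8n.pt")
results = model.track(source=0, show=True)
```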
## Configuration
### Tracking Arguments
Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configuration options, refer to the [Predict](https://docs.ultralytics.com/modes/predict/) mode page.
#### Python
```python
from ultralytics import YOLO

# Configure the tracking parameters and run the tracker
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
#### CLI
```bash
# Configure tracking parameters and run the tracker using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
### Tracker Selection
Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.
#### Python
```python
from ultralytics import YOLO

# Load the model and run the tracker with a custom configuration file
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
#### CLI
```bash
# Load the model and run the tracker with a custom configuration file using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) page.
## Python Examples
### Persisting Tracks Loop
Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.
#### Python
```python
import cv2

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
Please note the change from `model(frame)` to `model.track(frame)`, which enables object tracking instead of simple detection. This modified script will run the tracker on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
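If you do not need per-frame control of the OpenCV loop, a minimal alternative sketch (using the same `path/to/video.mp4` placeholder) lets `model.track()` read the video itself and yield one result per frame:

```python
from ultralytics import YOLO

# Minimal alternative sketch: model.track() reads the video directly and, with
# stream=True, yields one Results object per frame instead of buffering them all
model = YOLO("yolov8n.pt")

for result in model.track(source="path/to/video.mp4", stream=True, show=True):
    boxes = result.boxes  # tracked boxes for this frame; boxes.id holds the track IDs once assigned
```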
### Plotting Tracks Over Time
Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLOv8, plotting these tracks is a seamless and efficient process.
In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
#### Python
```python
from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        if results[0].boxes.id is not None:
            boxes = results[0].boxes.xywh.cpu()
            track_ids = results[0].boxes.id.int().cpu().tolist()
            for box, track_id in zip(boxes, track_ids):
                x, y, w, h = box
                track = track_history[track_id]
                track.append((float(x), float(y)))  # x, y center point
                if len(track) > 30:  # retain track points for the last 30 frames
                    track.pop(0)

                # Draw the tracking lines
                points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
                cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```