# Multi-Object Tracking with Ultralytics YOLO
<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418637-1d6250fd-1515-4c10-a844-a32818ae6d46.png" alt="YOLOv8 trackers visualization">
Object tracking in the realm of video analytics is a critical task that not only identifies the location and class of objects within the frame but also maintains a unique ID for each detected object as the video progresses. The applications are limitless—ranging from surveillance and security to real-time sports analytics.
## Why Choose Ultralytics YOLO for Object Tracking?
The output from Ultralytics trackers is consistent with standard object detection but has the added value of object IDs. This makes it easy to track objects in video streams and perform subsequent analytics. Here's why you should consider using Ultralytics YOLO for your object tracking needs:
- **Efficiency:** Process video streams in real-time without compromising accuracy.
- **Flexibility:** Supports multiple tracking algorithms and configurations.
- **Ease of Use:** Simple Python API and CLI options for quick integration and deployment.
- **Customizability:** Easy to use with custom trained YOLO models, allowing integration into domain-specific applications.
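To illustrate, here is a minimal sketch of reading those IDs back from the tracker output (the clip path is a placeholder; on frames where no track has formed yet, `boxes.id` is `None`):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.track(source="path/to/video.mp4")  # one Results object per frame

for result in results:
    if result.boxes.id is not None:  # IDs appear once tracks are established
        print(result.boxes.id.int().tolist())  # persistent per-object track IDs
```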
**Video Tutorial:** [Object Detection and Tracking with Ultralytics YOLOv8](https://www.youtube.com/embed/hHyHmOtmEgs?si=VNZtXmm45Nb9s-N-).
## Features at a Glance
Ultralytics YOLO extends its object detection features to provide robust and versatile object tracking:
- **Real-Time Tracking:** Seamlessly track objects in high-frame-rate videos.
- **Multiple Tracker Support:** Choose from a variety of established tracking algorithms.
- **Customizable Tracker Configurations:** Tailor the tracking algorithm to meet specific requirements by adjusting various parameters.
## Available Trackers
Ultralytics YOLO supports the following tracking algorithms. They can be enabled by passing the relevant YAML configuration file such as `tracker=tracker_type.yaml`:
- [BoT-SORT](https://github.com/NirAharon/BoT-SORT) - Use `botsort.yaml` to enable this tracker.
- [ByteTrack](https://github.com/ifzhang/ByteTrack) - Use `bytetrack.yaml` to enable this tracker.
The default tracker is BoT-SORT.
## Tracking
To run the tracker on video streams, use a trained Detect, Segment, or Pose model such as YOLOv8n, YOLOv8n-seg, or YOLOv8n-pose.
#### Python
```python
from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model
model = YOLO("yolov8n-seg.pt")  # Load an official Segment model
model = YOLO("yolov8n-pose.pt")  # Load an official Pose model
model = YOLO("path/to/starnet_pruned.pt")  # Load a custom trained model

# Perform tracking with the model
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True)  # Tracking with default tracker
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True, tracker="bytetrack.yaml")  # Tracking with ByteTrack tracker
```
#### CLI
```bash
# Perform tracking with various models using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" # Official Detect model
yolo track model=yolov8n-seg.pt source="https://youtu.be/LNwODJXcvt4" # Official Segment model
yolo track model=yolov8n-pose.pt source="https://youtu.be/LNwODJXcvt4" # Official Pose model
yolo track model=path/to/starnet_pruned.pt source="https://youtu.be/LNwODJXcvt4"  # Custom trained model

# Track using ByteTrack tracker
yolo track model=path/to/starnet_pruned.pt tracker="bytetrack.yaml"
```
As the usage above shows, tracking is available for all Detect, Segment, and Pose models run on videos or streaming sources.
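For long videos or live streams, it helps to avoid accumulating every frame's results in memory. Here is a sketch assuming `track()` forwards the Predict-mode `stream=True` argument, which turns the call into a frame-by-frame generator:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# With stream=True the call yields one Results object per frame as it is
# processed, keeping memory usage flat on long videos or live streams
for result in model.track(source="https://youtu.be/LNwODJXcvt4", stream=True):
    if result.boxes.id is not None:
        print(result.boxes.id.int().tolist())
```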
## Configuration
### Tracking Arguments
Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configuration options, refer to the [Predict](https://docs.ultralytics.com/modes/predict/) mode page.
#### Python
```python
from ultralytics import YOLO

# Configure the tracking parameters and run the tracker
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
#### CLI
```bash
# Configure tracking parameters and run the tracker using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show=True
```
### Tracker Selection
Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.
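For instance, a trimmed `custom_tracker.yaml` based on the bundled `botsort.yaml` might look like the sketch below. The keys shown follow the config shipped with recent Ultralytics releases, but exact names can vary between versions, so copy them from your local file rather than from memory:

```yaml
tracker_type: botsort   # must stay as-is; only 'botsort' or 'bytetrack' are valid
track_high_thresh: 0.5  # threshold for the first association
track_low_thresh: 0.1   # threshold for the second association
new_track_thresh: 0.6   # threshold to start a new track for unmatched detections
track_buffer: 30        # frames to keep a lost track alive before removal
match_thresh: 0.8       # matching threshold between tracks and detections
```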
#### Python
```python
from ultralytics import YOLO

# Load the model and run the tracker with a custom configuration file
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
#### CLI
```bash
# Load the model and run the tracker with a custom configuration file using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) page.
## Python Examples
### Persisting Tracks Loop
Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.
#### Python
```python
import cv2

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
Please note the change from `model(frame)` to `model.track(frame)`, which enables object tracking instead of simple detection. This modified script will run the tracker on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
### Plotting Tracks Over Time
Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLOv8, plotting these tracks is a seamless and efficient process.
In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. The script opens a video file, reads it frame by frame, and uses the YOLO model to identify and track objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
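A sketch of such a script follows, extending the persisting-tracks loop above with a per-ID history of box centers. It assumes the assigned track IDs are exposed as `results[0].boxes.id`, which can be `None` on frames where no track exists yet; the code guards against that case.
#### Python
```python
from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history (track ID -> list of box center points)
track_history = defaultdict(list)

# Loop through the video frames
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        # End of the video
        break

    # Run YOLOv8 tracking on the frame, persisting tracks between frames
    results = model.track(frame, persist=True)

    # Visualize the detections on the frame
    annotated_frame = results[0].plot()

    if results[0].boxes.id is not None:
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # box center point
            if len(track) > 30:  # keep the last 30 positions per track
                track.pop(0)

            # Draw the tracking line through the stored center points
            points = np.array(track, dtype=np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)

    # Display the annotated frame
    cv2.imshow("YOLOv8 Tracking", annotated_frame)

    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```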