# Multi-Object Tracking with Ultralytics YOLO
<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418637-1d6250fd-1515-4c10-a844-a32818ae6d46.png" alt="YOLOv8 trackers visualization">
Object tracking in the realm of video analytics is a critical task that not only identifies the location and class of objects within the frame but also maintains a unique ID for each detected object as the video progresses. The applications are limitless—ranging from surveillance and security to real-time sports analytics.
## Why Choose Ultralytics YOLO for Object Tracking?
The output from Ultralytics trackers is consistent with standard object detection but has the added value of object IDs. This makes it easy to track objects in video streams and perform subsequent analytics. Here's why you should consider using Ultralytics YOLO for your object tracking needs:
- **Efficiency:** Process video streams in real-time without compromising accuracy.
- **Flexibility:** Supports multiple tracking algorithms and configurations.
- **Ease of Use:** Simple Python API and CLI options for quick integration and deployment.
- **Customizability:** Easy to use with custom trained YOLO models, allowing integration into domain-specific applications.
**Video Tutorial:** [Object Detection and Tracking with Ultralytics YOLOv8](https://www.youtube.com/embed/hHyHmOtmEgs?si=VNZtXmm45Nb9s-N-).
## Features at a Glance
Ultralytics YOLO extends its object detection features to provide robust and versatile object tracking:
- **Real-Time Tracking:** Seamlessly track objects in high-frame-rate videos.
- **Multiple Tracker Support:** Choose from a variety of established tracking algorithms.
- **Customizable Tracker Configurations:** Tailor the tracking algorithm to meet specific requirements by adjusting various parameters.
## Available Trackers
Ultralytics YOLO supports the following tracking algorithms. They can be enabled by passing the relevant YAML configuration file such as `tracker=tracker_type.yaml`:
- [BoT-SORT](https://github.com/NirAharon/BoT-SORT) - Use `botsort.yaml` to enable this tracker.
- [ByteTrack](https://github.com/ifzhang/ByteTrack) - Use `bytetrack.yaml` to enable this tracker.
The default tracker is BoT-SORT.
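Both trackers build on the same core step: associating the current frame's detections with existing tracks, typically by overlap (IoU) with motion-predicted track boxes. The following is a toy sketch of greedy IoU association only — the real BoT-SORT and ByteTrack implementations additionally use Kalman-filter motion prediction, and ByteTrack performs a two-stage split over high- and low-confidence detections:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def greedy_match(tracks, detections, iou_thresh=0.3):
    """Greedily pair track boxes with detection boxes by descending IoU.

    Returns a list of (track_index, detection_index) pairs.
    """
    pairs = sorted(
        ((iou(t, d), ti, di) for ti, t in enumerate(tracks) for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thresh:
            break  # remaining pairs overlap too little to be the same object
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches


tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [(21, 21, 31, 31), (1, 1, 11, 11)]
print(greedy_match(tracks, detections))  # [(1, 0), (0, 1)]
```

Unmatched detections would become new track IDs, and unmatched tracks are kept alive for a configurable number of frames before being dropped.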
## Tracking
To run the tracker on video streams, use a trained Detect, Segment, or Pose model such as YOLOv8n, YOLOv8n-seg, or YOLOv8n-pose.
#### Python
```python
from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model
model = YOLO("yolov8n-seg.pt")  # Load an official Segment model
model = YOLO("yolov8n-pose.pt")  # Load an official Pose model
model = YOLO("path/to/best.pt")  # Load a custom trained model

# Perform tracking with the model
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True)  # Tracking with default tracker
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True, tracker="bytetrack.yaml")  # Tracking with ByteTrack tracker
```
#### CLI
```bash
# Perform tracking with various models using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" # Official Detect model
yolo track model=yolov8n-seg.pt source="https://youtu.be/LNwODJXcvt4" # Official Segment model
yolo track model=yolov8n-pose.pt source="https://youtu.be/LNwODJXcvt4" # Official Pose model
yolo track model=path/to/best.pt source="https://youtu.be/LNwODJXcvt4" # Custom trained model
# Track using ByteTrack tracker
yolo track model=path/to/best.pt tracker="bytetrack.yaml"
```
As the usage above shows, tracking is available for all Detect, Segment, and Pose models run on videos or streaming sources.
## Configuration
### Tracking Arguments
Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configurations, refer to the [Predict](https://docs.ultralytics.com/modes/predict/) mode page.
#### Python
```python
from ultralytics import YOLO
# Configure the tracking parameters and run the tracker
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", conf=0.3, iou=0.5, show=True)
```
#### CLI
```bash
# Configure tracking parameters and run the tracker using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" conf=0.3 iou=0.5 show
```
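Conceptually, `conf` discards low-confidence detections before tracking, and `iou` is the overlap threshold used by non-maximum suppression to drop duplicate boxes. A simplified sketch of that filtering step (not Ultralytics' actual implementation, which runs on tensors):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def filter_and_nms(detections, conf=0.3, iou_thresh=0.5):
    """Drop detections scoring below `conf`, then greedy NMS at `iou_thresh`.

    `detections` is a list of (x1, y1, x2, y2, score) tuples.
    """
    kept = []
    for det in sorted((d for d in detections if d[4] >= conf), key=lambda d: -d[4]):
        # Keep a box only if it does not overlap a higher-scoring kept box too much
        if all(iou(det[:4], k[:4]) <= iou_thresh for k in kept):
            kept.append(det)
    return kept


dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.2)]
print(filter_and_nms(dets))  # [(0, 0, 10, 10, 0.9)]
```

Here the 0.8-score box is suppressed because it overlaps the 0.9-score box above the `iou` threshold, and the 0.2-score box falls below `conf`.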
### Tracker Selection
Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.
#### Python
```python
from ultralytics import YOLO
# Load the model and run the tracker with a custom configuration file
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/LNwODJXcvt4", tracker="custom_tracker.yaml")
```
#### CLI
```bash
# Load the model and run the tracker with a custom configuration file using the command line interface
yolo track model=yolov8n.pt source="https://youtu.be/LNwODJXcvt4" tracker='custom_tracker.yaml'
```
For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) page.
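As a sketch, a `custom_tracker.yaml` derived from `botsort.yaml` might raise the threshold for starting new tracks and lengthen the track buffer. The field names below follow the repository's `botsort.yaml` at the time of writing; verify them against the copy shipped with your installed version:

```yaml
tracker_type: botsort   # must remain botsort; only the other fields may change
track_high_thresh: 0.5  # confidence threshold for the first association
track_low_thresh: 0.1   # confidence threshold for the second association
new_track_thresh: 0.7   # raised: require higher confidence to start a new track
track_buffer: 60        # lengthened: keep lost tracks alive for more frames
match_thresh: 0.8       # matching threshold for association
```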
## Python Examples
### Persisting Tracks Loop
Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`). The `persist=True` argument tells the tracker that the current image or frame is the next in a sequence and to expect tracks from the previous image in the current image.
#### Python
```python
import cv2

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
Please note the change from `model(frame)` to `model.track(frame)`, which enables object tracking instead of simple detection. This modified script will run the tracker on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
### Plotting Tracks Over Time
Visualizing object tracks over consecutive frames can provide valuable insights into the movement patterns and behavior of detected objects within a video. With Ultralytics YOLOv8, plotting these tracks is a seamless and efficient process.
In the following example, we demonstrate how to utilize YOLOv8's tracking capabilities to plot the movement of detected objects across multiple video frames. This script involves opening a video file, reading it frame by frame, and utilizing the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
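The bookkeeping this relies on — retaining a capped list of center points per track ID — can be sketched in isolation. `update_history` is a hypothetical helper for illustration, not part of the Ultralytics API:

```python
from collections import defaultdict

track_history = defaultdict(list)


def update_history(history, track_id, box_xywh, max_len=30):
    """Append the (x, y) center of an xywh box to a track, capping its length."""
    x, y, w, h = box_xywh
    track = history[track_id]
    track.append((float(x), float(y)))  # xywh boxes are already center-based
    if len(track) > max_len:
        track.pop(0)  # drop the oldest point once the cap is reached
    return track


# Simulate 40 frames of one object drifting diagonally
for frame in range(40):
    update_history(track_history, track_id=1, box_xywh=(frame, frame, 10, 10))

print(len(track_history[1]))  # 30
print(track_history[1][-1])   # (39.0, 39.0)
```

Connecting the stored points with `cv2.polylines` then draws the path each object has followed, as the full example below does.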
#### Python
```python
from collections import defaultdict
import cv2
import numpy as np
from ultralytics import YOLO
# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Get the boxes and track IDs
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # x, y center point
            if len(track) > 30:  # retain points for the last 30 frames
                track.pop(0)

            # Draw the tracking lines
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```