# RobMOTS Official Evaluation Code
### NEWS: [RobMOTS Challenge](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110) for the [RVSU CVPR'21 Workshop](https://eval.vision.rwth-aachen.de/rvsu-workshop21/) is now live!!!! Challenge deadline June 15.
### NEWS: [Call for short papers](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=74) (4 pages) on tracking and other video topics for [RVSU CVPR'21 Workshop](https://eval.vision.rwth-aachen.de/rvsu-workshop21/)!!!! Paper deadline June 4.
TrackEval is now the Official Evaluation Kit for the RobMOTS Challenge.
This repository contains the official evaluation code for the challenges available at the [RobMOTS Website](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110).
The RobMOTS Challenge tests trackers' ability to work robustly across 8 different benchmarks, while tracking the [80 categories of objects from COCO](https://cocodataset.org/#explore).
The following benchmarks are included:
| Benchmark | Website |
|----- | ----------- |
|MOTS Challenge| https://motchallenge.net/results/MOTS/ |
|KITTI-MOTS| http://www.cvlibs.net/datasets/kitti/eval_mots.php |
|DAVIS Challenge Unsupervised| https://davischallenge.org/challenge2020/unsupervised.html |
|YouTube-VIS| https://youtube-vos.org/dataset/vis/ |
|BDD100k MOTS| https://bdd-data.berkeley.edu/ |
|TAO| https://taodataset.org/ |
|Waymo Open Dataset| https://waymo.com/open/ |
|OVIS| http://songbai.site/ovis/ |
## Installing, obtaining the data, and running
Simply follow the code snippet below to install the evaluation code, download the train groundtruth data and an example tracker, and run the evaluation code on the sample tracker.
Note that the code requires Python 3.5 or higher.
```
# Download the TrackEval repo
git clone https://github.com/JonathonLuiten/TrackEval.git
# Move to repo folder
cd TrackEval
# Create a virtual env in the repo for evaluation
python3 -m venv ./venv
# Activate the virtual env
source venv/bin/activate
# Update pip to have the latest version of packages
pip install --upgrade pip
# Install the required packages
pip install -r requirements.txt
# Download the train gt data
wget https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/train_gt.zip
# Unzip the train gt data you just downloaded.
unzip train_gt.zip
# Download the example tracker
wget https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/example_tracker.zip
# Unzip the example tracker you just downloaded.
unzip example_tracker.zip
# Run the evaluation on the provided example tracker on the train split (using 4 cores in parallel)
python scripts/run_rob_mots.py --ROBMOTS_SPLIT train --TRACKERS_TO_EVAL STP --USE_PARALLEL True --NUM_PARALLEL_CORES 4
```
You may further download the raw sequence images and supplied detections (as well as train GT data and example tracker) by following the ```Data Download``` link here:
[RobMOTS Challenge Info](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)
## Accessing tracking evaluation results
You will find the results of the evaluation (for the supplied tracker STP) in the folder ```TrackEval/data/trackers/rob_mots/train/STP/```.
The overall summary of the results is in ```./final_results.csv```, while more detailed per-sequence and per-class results, as well as result plots, can be found under ```./results/*```.
The ```final_results.csv``` is most easily read by opening it in Excel or similar. The ```c```, ```d``` and ```f``` prefixes on the metric names refer respectively to ```class averaged```, ```detection averaged (class agnostic)``` and ```final``` (the geometric mean of the class-averaged and detection-averaged scores).
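As a quick illustration, the snippet below is a minimal sketch of inspecting the summary programmatically. It assumes pandas is installed, that it is run from the TrackEval repo root, and that the CSV has a standard header row; the HOTA values at the end are made-up numbers used only to show the geometric-mean relationship between the ```c```, ```d``` and ```f``` variants.

```
import math

import pandas as pd

# Load the overall summary for the supplied STP tracker.
results = pd.read_csv('data/trackers/rob_mots/train/STP/final_results.csv')
print(results.columns.tolist())  # inspect which c/d/f metrics are reported

# Illustration of the prefix convention with made-up values:
# an 'f' metric is the geometric mean of its 'c' and 'd' counterparts.
c_hota, d_hota = 0.40, 0.60
f_hota = math.sqrt(c_hota * d_hota)
print(f'fHOTA = {f_hota:.3f}')  # -> 0.490
```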
## Supplied Detections
To make creating your own tracker particularly easy, we supply a set of strong detections.
These detections come from the Detectron2 Mask R-CNN X152 model (the very bottom model on this [page](https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md), which achieves a COCO detection mAP of 50.2).
We then obtain segmentation masks for these detections using the Box2Seg Network (also called Refinement Net), which results in far more accurate masks than the default Mask R-CNN masks. The code for this can be found [here](https://github.com/JonathonLuiten/PReMVOS/tree/master/code/refinement_net).
We supply two different sets of detections. The first is the ```raw_supplied``` detections, which take all 1000 detections output by the Mask R-CNN and remove only those for which the maximum class score is less than 0.02 (no non-maximum suppression, NMS, is run here). These can be downloaded [here](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110).
The second is the ```non_overlap_supplied``` detections. These are the same detections as above, but with two further processing steps applied. First, we perform non-maximum suppression (NMS) with a threshold of 0.5 to remove any mask which has an IoU of 0.5 or more with another mask that has a higher score. Second, we run a non-overlap algorithm which forces all of the masks for a single image to be non-overlapping. It does this by layering the masks 'on top of' one another in order of decreasing score, so that a lower-scoring mask is partially removed wherever a higher-scoring mask overlaps it. Note that these detections are still only thresholded at a score of 0.02; in general we recommend thresholding at a higher value to get a good balance of precision and recall.
Code for this NMS and Non-Overlap algorithm can be found here:
[Non-Overlap Code](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/non_overlap.py).
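For intuition, the following is a minimal sketch of the two steps described above, operating on dense boolean masks with NumPy. It is an illustrative re-implementation under those assumptions, not the repository code linked above.

```
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def nms_and_non_overlap(masks, scores, iou_thresh=0.5):
    """masks: list of HxW boolean arrays; scores: list of floats.
    Returns non-overlapping masks and their scores, ordered by decreasing score."""
    order = np.argsort(scores)[::-1]
    # NMS: drop any mask with IoU >= iou_thresh against a higher-scoring kept mask.
    kept = []
    for idx in order:
        if all(mask_iou(masks[idx], masks[k]) < iou_thresh for k in kept):
            kept.append(idx)
    # Non-overlap: lay masks down from highest to lowest score; pixels already
    # claimed by a higher-scoring mask are removed from lower-scoring masks.
    final_masks = []
    occupied = None
    for idx in kept:
        if occupied is None:
            occupied = np.zeros_like(masks[idx], dtype=bool)
        m = np.logical_and(masks[idx], np.logical_not(occupied))
        occupied = np.logical_or(occupied, m)
        final_masks.append(m)
    return final_masks, [scores[i] for i in kept]
```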
Note that for RobMOTS evaluation the final tracking results need to be 'non-overlapping', so we recommend using the ```non_overlap_supplied``` detections. However, you may use the ```raw_supplied``` detections, your own detections, or any other detections you like.
Supplied detections (both raw and non-overlapping) are available for the train, val and test sets.
Example code for reading in these detections and using them can be found here:
[Tracker Example](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/stp.py).
## Creating your own tracker
We provide sample code ([Tracker Example](https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/baselines/stp.py)) for our STP tracker (Simplest Tracker Possible), which walks through how to create tracking results in the required RobMOTS format.
This includes code for reading in the supplied detections and writing out the tracking results in the desired format, plus many other useful functions (IoU calculation etc).
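As one example of such a utility, IoU between two segmentation masks can be computed directly on compressed RLE encodings with pycocotools. The sketch below is illustrative only (it is not the helper code from the STP example, and the toy masks are made up):

```
import numpy as np
from pycocotools import mask as mask_utils

# Two toy binary masks (HxW); in practice these would come from your detections.
a = np.zeros((100, 100), dtype=np.uint8)
a[10:50, 10:50] = 1
b = np.zeros((100, 100), dtype=np.uint8)
b[30:70, 30:70] = 1

# Encode to compressed RLE (pycocotools expects Fortran-ordered uint8 arrays).
rle_a = mask_utils.encode(np.asfortranarray(a))
rle_b = mask_utils.encode(np.asfortranarray(b))

# IoU between the two RLE masks (the iscrowd flag is 0 for normal objects).
iou = mask_utils.iou([rle_a], [rle_b], [0])
print(iou)  # 1x1 array, roughly 0.14 for these toy masks
```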
## Evaluating your own tracker
To evaluate your own tracker, put your results in a folder named after your tracker (e.g. ```YOUR_TRACKER```) inside ```TrackEval/data/trackers/rob_mots/train/```, alongside the supplied tracker STP.
You can then run the evaluation code on your tracker like this:
```
python scripts/run_rob_mots.py --ROBMOTS_SPLIT train --TRACKERS_TO_EVAL YOUR_TRACKER --USE_PARALLEL True --NUM_PARALLEL_CORES 4
```
## Data format
For RobMOTS, trackers must submit their results in the following folder format:
```
|—— <Benchmark01>
    |—— <Benchmark01SeqName01>.txt
    |—— <Benchmark01SeqName02>.txt
    |—— <Benchmark01SeqName03>.txt
|—— <Benchmark02>
    |—— <Benchmark02SeqName01>.txt
    |—— <Benchmark02SeqName02>.txt
    |—— <Benchmark02SeqName03>.txt
```
See the supplied STP tracker results (in the Train Data linked above) for an example.
Thus there is one .txt file for each sequence. This file has one row per detection (object mask in one frame). Each row must have 7 values and has the following format:
```
<Timestep>(int),
<Track ID>(int),
<Class Number>(int),
<Detection Confidence>(float),
<Image Height>(int),
<Image Width>(int),
<Compressed RLE Mask>(string),
```
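Assuming the seven-value layout sketched above, a single detection row could be written roughly as follows. This is a hypothetical sketch: the field separator (a space is used here) and the exact mask encoding should be checked against the STP example results linked above; pycocotools is used to produce the compressed RLE mask string.

```
import numpy as np
from pycocotools import mask as mask_utils

def write_detection_row(f, timestep, track_id, class_num, conf, binary_mask):
    """Write one detection (object mask in one frame) as a single row.
    binary_mask is an HxW array of 0s and 1s. A space separator is assumed;
    check the STP example output for the exact convention."""
    h, w = binary_mask.shape
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle_str = rle['counts'].decode('utf-8')
    f.write(f'{timestep} {track_id} {class_num} {conf} {h} {w} {rle_str}\n')
```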