# PyTracking
A general Python library for visual tracking algorithms.
## Table of Contents
* [Running a tracker](#running-a-tracker)
* [Overview](#overview)
* [Trackers](#trackers)
* [LWL](#lwl)
* [KYS](#kys)
* [DiMP](#dimp)
* [ATOM](#atom)
* [ECO](#eco)
* [Analysis](#analysis)
* [Libs](#libs)
* [Visdom](#visdom)
* [VOT Integration](#vot-integration)
* [Integrating a new tracker](#integrating-a-new-tracker)
## Running a tracker
The installation script automatically generates a local configuration file at "evaluation/local.py". In case the file was not generated, run ```evaluation.environment.create_default_local_file()``` to create it. Next, set the paths to the datasets you want
to use for evaluation. You can also change the paths to the networks folder and the results folder if you do not want to use the defaults. Once all the dependencies have been correctly installed, you are set to run the trackers.
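The generated local.py simply defines a settings class whose attributes hold these paths. A minimal sketch of what an edited file might look like (all paths below are placeholders, and the exact attribute names may differ in your version of the toolkit):

```python
# evaluation/local.py -- a hedged sketch; all paths are placeholders
class EnvironmentSettings:
    def __init__(self):
        self.results_path = '/data/tracking_results/'  # where raw tracking results are stored
        self.network_path = '/data/networks/'          # where pre-trained networks are stored
        self.otb_path = '/data/OTB100/'                # paths to the datasets you evaluate on
        self.nfs_path = '/data/NFS/'
        self.uav_path = '/data/UAV123/'
```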
The toolkit provides many ways to run a tracker.
**Run the tracker on webcam feed**
This is done using the run_webcam script. The arguments are the name of the tracker, and the name of the parameter file. You can select the object to track by drawing a bounding box. **Note:** It is possible to select multiple targets to track!
```bash
python run_webcam.py tracker_name parameter_name
```
**Run the tracker on some dataset sequence**
This is done using the run_tracker script.
```bash
python run_tracker.py tracker_name parameter_name --dataset_name dataset_name --sequence sequence --debug debug --threads threads
```
Here, ```dataset_name``` is the name of the dataset used for evaluation, e.g. ```otb```. See [evaluation.datasets.py](evaluation/datasets.py) for the list of supported datasets. The ```sequence``` can either be an integer denoting the index of the sequence in the dataset, or the name of the sequence, e.g. ```'Soccer'```.
The ```debug``` parameter can be used to control the level of debug visualizations, while the ```threads``` parameter can be used to run on multiple threads.
**Run the tracker on a set of datasets**
This is done using the run_experiment script. To use this, first you need to create an experiment setting file in ```pytracking/experiments```. See [myexperiments.py](experiments/myexperiments.py) for reference.
```bash
python run_experiment.py experiment_module experiment_name --dataset_name dataset_name --sequence sequence --debug debug --threads threads
```
Here, ```experiment_module``` is the name of the experiment setting file, e.g. ```myexperiments```, and ```experiment_name``` is the name of the experiment setting, e.g. ```atom_nfs_uav```.
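An experiment setting is just a function in that file which returns the trackers and datasets to run. A hedged sketch of what an entry in ```myexperiments.py``` might look like, essentially a settings fragment for the toolkit (the ```trackerlist``` and ```get_dataset``` helpers are assumed to come from ```pytracking.evaluation```, and their exact signatures may differ in your version):

```python
# pytracking/experiments/myexperiments.py -- a sketch, not a verified file
from pytracking.evaluation import get_dataset, trackerlist

def atom_nfs_uav():
    # Run one instance of ATOM with the 'default' parameter file
    trackers = trackerlist('atom', 'default', range(1))
    # Evaluate on the NFS and UAV123 datasets
    dataset = get_dataset('nfs', 'uav')
    return trackers, dataset
```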
**Run the tracker on a video file**
This is done using the run_video script.
```bash
python run_video.py tracker_name parameter_name videofile --optional_box optional_box --debug debug
```
Here, ```videofile``` is the path to the video file. You can either draw the box by hand or provide it directly in the ```optional_box``` argument.
## Overview
The toolkit consists of the following sub-modules.
- [analysis](analysis): Contains scripts to analyse tracking performance, e.g. obtain success plots, compute AUC score. It also contains a [script](analysis/playback_results.py) to playback saved results for debugging.
- [evaluation](evaluation): Contains the necessary scripts for running a tracker on a dataset. It also contains integrations of a number of standard tracking and video object segmentation datasets, namely [OTB-100](http://cvlab.hanyang.ac.kr/tracker_benchmark/index.html), [NFS](http://ci2cv.net/nfs/index.html),
[UAV123](https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx), [Temple Color 128](http://www.dabi.temple.edu/~hbling/data/TColor-128/TColor-128.html), [TrackingNet](https://tracking-net.org/), [GOT-10k](http://got-10k.aitestunion.com/), [LaSOT](https://cis.temple.edu/lasot/), [VOT](http://www.votchallenge.net), [DAVIS](https://davischallenge.org), and [YouTube-VOS](https://youtube-vos.org).
- [experiments](experiments): The experiment setting files must be stored here.
- [features](features): Contains tools for feature extraction, data augmentation and wrapping networks.
- [libs](libs): Includes libraries for optimization, dcf, etc.
- [notebooks](notebooks): Jupyter notebooks to analyze tracker performance.
- [parameter](parameter): Contains the parameter settings for different trackers.
- [tracker](tracker): Contains the implementations of different trackers.
- [util_scripts](util_scripts): Utility scripts, e.g. for packing results for the GOT-10k and TrackingNet evaluation servers and for downloading pre-computed results.
- [utils](utils): General utility functions.
- [VOT](VOT): VOT Integration.
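As a concrete illustration of the AUC score computed by the analysis module: a success plot reports, for each IoU threshold, the fraction of frames whose predicted box overlaps the ground truth by at least that threshold, and the AUC is the average success rate over the thresholds. A minimal self-contained sketch (not the toolkit's implementation; it conventionally uses 21 thresholds evenly spaced in [0, 1]):

```python
def success_auc(ious, n_thresholds=21):
    """Average success rate over IoU thresholds evenly spaced in [0, 1]."""
    thresholds = [i / (n_thresholds - 1) for i in range(n_thresholds)]
    # Success rate at threshold t: fraction of frames with IoU >= t
    success = [sum(iou >= t for iou in ious) / len(ious) for t in thresholds]
    return sum(success) / len(success)

# Perfect overlap on every frame gives the maximum score of 1.0
print(success_auc([1.0, 1.0, 1.0]))  # -> 1.0
```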
## Trackers
The toolkit contains the implementation of the following trackers.
### LWL
The official implementation for the LWL tracker ([paper](https://arxiv.org/pdf/2003.11540.pdf)).
The tracker implementation file can be found at [tracker.lwl](tracker/lwl).
##### Parameter Files
Two parameter settings are provided. These can be used to reproduce the results or as a starting point for your exploration.
* **[lwl_ytvos](parameter/lwl/lwl_ytvos.py)**: The default parameter setting with ResNet-50 backbone which was used to generate YouTubeVOS results.
* **[lwl_boxinit](parameter/lwl/lwl_boxinit.py)**: The parameter settings used to generate results with bounding box initialization on the YouTubeVOS and DAVIS datasets.
### KYS
The official implementation for the KYS tracker ([paper](https://arxiv.org/pdf/2003.11014.pdf)).
The tracker implementation file can be found at [tracker.kys](tracker/kys).
##### Parameter Files
* **[default](parameter/kys/default.py)**: The default parameter setting with ResNet-50 backbone which was used to produce all results in the paper, except on VOT and LaSOT.
* **[default_vot](parameter/kys/default_vot.py)**: The parameter settings used to generate the VOT2018 results in the paper.
### DiMP
The official implementation for the DiMP tracker ([paper](https://arxiv.org/abs/1904.07220)) and PrDiMP tracker ([paper](https://arxiv.org/abs/2003.12565)).
The tracker implementation file can be found at [tracker.dimp](tracker/dimp).
##### Parameter Files
* **[dimp18](parameter/dimp/dimp18.py)**: The default parameter setting with ResNet-18 backbone which was used to produce all DiMP-18 results in the paper, except on VOT.
* **[dimp18_vot](parameter/dimp/dimp18_vot18.py)**: The parameter settings used to generate the DiMP-18 VOT2018 results in the paper.
* **[dimp50](parameter/dimp/dimp50.py)**: The default parameter setting with ResNet-50 backbone which was used to produce all DiMP-50 results in the paper, except on VOT.
* **[dimp50_vot](parameter/dimp/dimp50_vot18.py)**: The parameter settings used to generate the DiMP-50 VOT2018 results in the paper.
* **[prdimp18](parameter/dimp/prdimp18.py)**: The default parameter setting with ResNet-18 backbone which was used to produce all PrDiMP-18 results in the paper, except on VOT.
* **[prdimp50](parameter/dimp/prdimp50.py)**: The default parameter setting with ResNet-50 backbone which was used to produce all PrDiMP-50 results in the paper, except on VOT.
* **[super_dimp](parameter/dimp/super_dimp.py)**: Combines the bounding-box regressor of PrDiMP with the standard DiMP classifier and better training and inference settings.
The difference between the VOT and the non-VOT settings stems from the fact that the VOT protocol measures robustness in a very different manner compared to other benchmarks. In most benchmarks, it is highly important to be able to robustly *redetect* the target after e.g. an occlusion or brief target loss. In VOT, on the other hand, the tracker is reset if the prediction does not overlap with the target on a *single* frame; this is then counted as a tracking failure. Redetection ability is therefore never exercised under the VOT protocol, which instead rewards trackers that avoid even momentary target loss, and the VOT parameter settings are tuned accordingly.
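The distinction can be made concrete: in a one-pass evaluation, a frame with zero overlap merely lowers the average overlap, while under the VOT protocol each such frame triggers a reset and counts as a failure. A minimal sketch of the failure-counting side (a simplification, not the actual VOT toolkit logic, which also skips several frames after each reset before re-initializing):

```python
def count_vot_failures(overlaps):
    """Count resets: under the VOT protocol, every frame where the
    prediction has zero overlap with the ground truth is a failure."""
    return sum(1 for o in overlaps if o == 0.0)

# One brief target loss -> one failure, regardless of later redetection
print(count_vot_failures([0.8, 0.7, 0.0, 0.6, 0.9]))  # -> 1
```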