# GaitSet
#### Flexible
The input of GaitSet is a set of silhouettes.
- There are **no constraints** on the input,
which means it can contain **any number** of **non-consecutive** silhouettes filmed from **different viewpoints**
under **different walking conditions**.
- As the input is a set, the **permutation** of the elements in the input
will **NOT change** the output at all.
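This permutation invariance can be illustrated with a minimal sketch (illustrative only, not the actual GaitSet network; it assumes max-pooling over the frame axis as the set-pooling operation):

```python
import numpy as np

# Illustrative sketch (not the GaitSet implementation): pooling over the
# frame axis treats the input as a set, so any permutation of the frames
# yields exactly the same pooled feature.
def set_pool(frame_features):
    # frame_features: (n_frames, feature_dim)
    return frame_features.max(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 128))     # features of 30 silhouette frames
shuffled = feats[rng.permutation(30)]  # the same set, reordered

assert np.array_equal(set_pool(feats), set_pool(shuffled))
```

Any symmetric (order-independent) pooling function, such as `max` or `mean` over the frame axis, gives this property.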
#### Effective
It achieves **Rank@1=95.0%** on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp)
and **Rank@1=87.1%** on [OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html),
excluding identical-view cases.
#### Fast
With 8 NVIDIA 1080Ti GPUs, it takes only **7 minutes** to conduct an evaluation on
[OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html), which contains 133,780 sequences
with an average of 70 frames per sequence.
## What's new
The code and checkpoint for the OUMVLP dataset have been released.
See [OUMVLP](#oumvlp) for details.
## Prerequisites
- Python 3.6
- PyTorch 0.4+
- GPU
## Getting started
### Installation
- (Optional) Install [Anaconda3](https://www.anaconda.com/download/)
- Install [CUDA 9.0](https://developer.nvidia.com/cuda-90-download-archive)
- Install [cuDNN 7.0](https://developer.nvidia.com/cudnn)
- Install [PyTorch](http://pytorch.org/)
Note that our code has been tested with [PyTorch 0.4](http://pytorch.org/)
### Dataset & Preparation
Download [CASIA-B Dataset](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp)
**!!! ATTENTION !!! ATTENTION !!! ATTENTION !!!**
Before training or testing, please make sure you have prepared the dataset
by these two steps:
- **Step1:** Organize the directory as:
`your_dataset_path/subject_ids/walking_conditions/views`.
E.g. `CASIA-B/001/nm-01/000/`.
- **Step2:** Cut and align the raw silhouettes with `pretreatment.py`.
(See [pretreatment](#pretreatment) for details.)
Feel free to try different pretreatment methods, but note that
the silhouettes after pretreatment **MUST have a size of 64x64**.
Furthermore, you can also test our code on the [OU-MVLP Dataset](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
The number of channels and the training batch size are slightly different for this dataset.
For more details, please refer to [our paper](https://arxiv.org/abs/1811.06186).
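The expected layout can be sanity-checked with a short script. The helper below is hypothetical (not part of the repository) and simply walks the `subject_ids/walking_conditions/views` tree described above:

```python
import os
import tempfile

# Hypothetical helper (not part of the repository): walk the dataset root
# and yield every (subject, walking_condition, view) triple it contains,
# so you can verify the subject_ids/walking_conditions/views layout.
def iter_sequences(root):
    for subject in sorted(os.listdir(root)):
        for condition in sorted(os.listdir(os.path.join(root, subject))):
            cond_path = os.path.join(root, subject, condition)
            for view in sorted(os.listdir(cond_path)):
                yield subject, condition, view

# Toy tree mirroring the CASIA-B/001/nm-01/000/ example above.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "001", "nm-01", "000"))
print(list(iter_sequences(root)))  # [('001', 'nm-01', '000')]
```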
#### Pretreatment
`pretreatment.py` uses the alignment method in
[this paper](https://ipsjcva.springeropen.com/articles/10.1186/s41074-018-0039-6).
Pretreat your dataset by running:
```bash
python pretreatment.py --input_path='root_path_of_raw_dataset' --output_path='root_path_for_output'
```
- `--input_path` **(NECESSARY)** Root path of raw dataset.
- `--output_path` **(NECESSARY)** Root path for output.
- `--log_file` Log file path. #Default: './pretreatment.log'
- `--log` If set as True, all logs will be saved.
Otherwise, only warnings and errors will be saved. #Default: False
- `--worker_num` How many subprocesses to use for data pretreatment. #Default: 1
### Configuration
In `config.py`, you might want to change the following settings:
- `dataset_path` **(NECESSARY)** root path of the dataset
(for the above example, it is "gaitdata")
- `WORK_PATH` path to save/load checkpoints
- `CUDA_VISIBLE_DEVICES` indices of GPUs
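As an illustration, the relevant entries might look like the following. All values are placeholders, and the exact nesting in your copy of `config.py` may differ from this sketch:

```python
# Placeholder values -- adapt the paths and GPU indices to your machine;
# the exact structure of config.py may differ from this sketch.
conf = {
    'WORK_PATH': './work',              # checkpoints are saved/loaded here
    'CUDA_VISIBLE_DEVICES': '0,1,2,3',  # indices of the GPUs to use
    'data': {
        'dataset_path': 'your_dataset_path',  # root of the pretreated dataset
    },
}
```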
### Train
Train a model by
```bash
python train.py
```
- `--cache` If set as TRUE, all the training data will be loaded into memory before training starts.
This will accelerate the training.
**Note that** if this argument is set as FALSE, samples will NOT be kept in memory
even if they have been used in former iterations. #Default: TRUE
### Evaluation
Evaluate the trained model by
```bash
python test.py
```
- `--iter` iteration of the checkpoint to load. #Default: 80000
- `--batch_size` batch size of the parallel test. #Default: 1
- `--cache` If set as TRUE, all the test data will be loaded into memory before the transformation starts.
This might accelerate the testing. #Default: FALSE
It will output Rank@1 for all three walking conditions.
Note that the test is **parallelizable**:
to conduct a faster evaluation, use `--batch_size` to increase the test batch size.
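As a rough sketch of the reported metric (illustrative only, not the code in `test.py`): Rank@1 excluding identical-view cases asks, for each probe, whether its nearest gallery sample from a *different* view has the same identity:

```python
import numpy as np

# Illustrative sketch of Rank@1 excluding identical-view cases
# (not the evaluation code in test.py).
def rank1_excluding_identical_view(probe_f, probe_id, probe_view,
                                   gallery_f, gallery_id, gallery_view):
    hits, total = 0, 0
    for f, pid, pview in zip(probe_f, probe_id, probe_view):
        mask = gallery_view != pview         # exclude identical-view cases
        if not mask.any():
            continue
        dists = np.linalg.norm(gallery_f[mask] - f, axis=1)
        hits += int(gallery_id[mask][np.argmin(dists)] == pid)
        total += 1
    return hits / total
```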
#### OUMVLP
Due to the large differences between OUMVLP and CASIA-B, the network settings for OUMVLP are slightly different.
- The modified network code can be found at `./work/OUMVLP_network`. Use these files to replace the corresponding ones in `./model/network`.
- The checkpoint can be found [here](https://1drv.ms/u/s!AurT2TsSKdxQuWN8drzIv_phTR5m?e=Gfbl3m).
- In `./config.py`, change `'batch_size': (8, 16)` to `'batch_size': (32, 16)`.
- Prepare your OUMVLP dataset according to the instructions in [Dataset & Preparation](#dataset--preparation).