# High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection
Keras implementation of [CSP] accepted by CVPR 2019. A PyTorch implementation is included in [Pedestron](https://github.com/hasanirtiza/Pedestron).
## Introduction
This paper provides a new perspective for detecting pedestrians, where detection is formulated as Center and Scale Prediction (CSP); the pipeline is illustrated below. For more details, please refer to our [paper](./docs/2019CVPR-CSP.pdf).
![img01](./docs/pipeline.png)
Besides the superiority on pedestrian detection demonstrated in the paper, we take a step further towards the generality of CSP and validate it on face detection. Experimental results on the WiderFace benchmark also show the competitiveness of CSP.
![img02](./docs/face.jpg)
### Dependencies
* Python 2.7
* Tensorflow 1.4.1
* Keras 2.0.6
* OpenCV 3.4.1.15
## Contents
1. [Installation](#installation)
2. [Preparation](#preparation)
3. [Training](#training)
4. [Test](#test)
5. [Evaluation](#evaluation)
6. [Models](#models)
### Installation
1. Get the code. We will refer to the cloned directory as `$CSP`.
```
git clone https://github.com/liuwei16/CSP.git
```
2. Install the requirements.
```
pip install -r requirements.txt
```
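Because the code is sensitive to the exact library versions listed under Dependencies, it can help to verify your environment against the pins before training. The sketch below is a minimal, hedged helper: it assumes `requirements.txt` uses exact `name==version` pins (the sample package names are illustrative, not copied from the repo's actual file).

```python
# Parse exact version pins from a requirements-style file so they can be
# compared against the installed environment.
def parse_pins(lines):
    """Return {package: version} for lines of the form 'name==version'.
    Comment lines and lines without '==' are skipped."""
    pins = {}
    for line in lines:
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, version = line.split("==", 1)
            pins[name.strip()] = version.strip()
    return pins

if __name__ == "__main__":
    # Illustrative sample; check against your real requirements.txt.
    sample = ["tensorflow==1.4.1", "Keras==2.0.6", "# a comment",
              "opencv-python==3.4.1.15"]
    print(parse_pins(sample))
```

Comparing the returned dict against `pkg.__version__` of each installed package catches the OpenCV mismatch mentioned in the Models section early.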
### Preparation
1. Download the dataset.
For pedestrian detection, we train and test our model on [Caltech](http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/) and [CityPersons](https://bitbucket.org/shanshanzhang/citypersons); please download these datasets first. By default, we assume the datasets are stored in `./data/`.
2. Dataset preparation.
For Caltech, you can follow [./eval_caltech/extract_img_anno.m](./eval_caltech/extract_img_anno.m) to extract the official seq files into images. Training and testing are based on the new annotations provided by [Shanshan2016CVPR](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/people-detection-pose-estimation-and-tracking/how-far-are-we-from-solving-pedestrian-detection/). We use the train_10x setting (42782 images) for training; the official test set contains 4024 images. By default, we assume that images and annotations are stored in `./data/caltech`, and the directory structure is
```
*DATA_PATH
*train_3
*annotations_new
*set00_V000_I00002.txt
*...
*images
*set00_V000_I00002.jpg
*...
*test
*annotations_new
*set06_V000_I00029.jpg.txt
*...
*images
*set06_V000_I00029.jpg
*...
```
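Note that the two splits name their annotation files differently: `train_3` annotations replace the `.jpg` extension with `.txt`, while `test` annotations append `.txt` to the full image name. A small helper that maps an image filename to its annotation path, assuming exactly the layout shown above, might look like:

```python
import os

def caltech_anno_path(data_path, split, image_name):
    """Map an image filename to its annotation path under the Caltech
    layout shown above. 'train_3' annotations drop the '.jpg' extension;
    'test' annotations keep it and append '.txt'."""
    if split == "train_3":
        base, _ = os.path.splitext(image_name)
        anno_name = base + ".txt"          # set00_V000_I00002.txt
    else:
        anno_name = image_name + ".txt"    # set06_V000_I00029.jpg.txt
    return os.path.join(data_path, split, "annotations_new", anno_name)
```

This mirrors the directory listing only; verify it against your extracted data before relying on it.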
For CityPersons, we use the training set (2975 images) for training and test on the validation set (500 images). We assume that images and annotations are stored in `./data/citypersons`, and the directory structure is
```
*DATA_PATH
*annotations
*anno_train.mat
*anno_val.mat
*images
*train
*val
```
We have provided the cache files of the training and validation subsets. Optionally, you can also follow [./generate_cache_caltech.py](./generate_cache_caltech.py) and [./generate_cache_city.py](./generate_cache_city.py) to create the cache files for training and validation. By default, we assume the cache files are stored in `./data/cache/`. For Caltech, we split the training set into images with and without any pedestrian instance, resulting in 9105 and 33767 images, respectively.
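If you want to inspect a cache file, it is presumably a pickled list of per-image records written by the `generate_cache_*` scripts (an assumption; the exact record fields may differ). Since those scripts target Python 2.7, reading their pickles from Python 3 needs a `latin1` encoding:

```python
import pickle

def load_cache(path):
    """Load a CSP-style cache file, assumed to be a pickled list of
    per-image records. encoding='latin1' lets Python 3 read pickles
    written by the repo's Python 2.7 scripts."""
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")

# e.g. records = load_cache("./data/cache/caltech/train_gt")
```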
3. Download the initialized models.
We use the backbones [ResNet-50](https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5) and [MobileNet_v1](https://github.com/fchollet/deep-learning-models/releases/download/v0.6/) in our experiments. By default, we assume the weight files are stored in `./data/models/`.
### Training
Before training, you can optionally adjust the training parameters in [./keras_csp/config.py](./keras_csp/config.py).
1. Train on Caltech.
Follow the [./train_caltech.py](./train_caltech.py) to start training. The weight files of all epochs will be saved in `./output/valmodels/caltech`.
2. Train on CityPersons.
Follow the [./train_city.py](./train_city.py) to start training. The weight files of all epochs will be saved in `./output/valmodels/city`.
### Test
1. Caltech.
Follow the [./test_caltech.py](./test_caltech.py) to get the detection results. You can test from epoch 51, and the results will be saved in `./output/valresults/caltech`.
2. CityPersons.
Follow the [./test_city.py](./test_city.py) to get the detection results. You can test from epoch 51, and the results will be saved in `./output/valresults/city`.
### Evaluation
1. Caltech.
Follow the [./eval_caltech/dbEval.m](./eval_caltech/dbEval.m) to get the Miss Rates of detections in `pth.resDir` defined in line 25. Finally, evaluation results will be saved as `eval-newReasonable.txt` in `./eval_caltech/ResultsEval`.
2. CityPersons.
(1) Follow [./eval_city/dt_txt2json.m](./eval_city/dt_txt2json.m) to convert the `.txt` files to `.json`. The specific `main_path` is defined in line 3.
(2) Follow the [./eval_city/eval_script/eval_demo.py](./eval_city/eval_script/eval_demo.py) to get the Miss Rates of detections in `main_path` defined in line 9.
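If MATLAB is unavailable, the conversion step can be replicated in Python. The sketch below is an assumption-laden stand-in for `dt_txt2json.m`: it assumes each detection line holds `image_id x y w h score` separated by whitespace and emits COCO-style records (`image_id`, `category_id`, `bbox`, `score`). Verify the field order against your actual `.txt` files before use.

```python
import json

def txt_line_to_record(line):
    """Convert one detection line to a COCO-style dict.
    Assumed field order: image_id x y w h score (whitespace-separated)."""
    image_id, x, y, w, h, score = (float(v) for v in line.split())
    return {
        "image_id": int(image_id),
        "category_id": 1,        # single 'pedestrian' class
        "bbox": [x, y, w, h],
        "score": score,
    }

def txt_to_json(txt_path, json_path):
    """Write all non-empty lines of a detection .txt file as one JSON list."""
    with open(txt_path) as f:
        records = [txt_line_to_record(line) for line in f if line.strip()]
    with open(json_path, "w") as f:
        json.dump(records, f)
```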
### Models
To reproduce the results in our paper, we have provided the models trained on different datasets. You can download them through [BaiduYun](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw) (Code: jcgd). For Caltech, please make sure the OpenCV version is 3.4.1.15; other versions may decode the same image into slightly different pixel values, resulting in slightly different performance.
1. For Caltech
ResNet-50 initialized from ImageNet:
Height prediction: [model_CSP/caltech/fromimgnet/h/nooffset](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw)
Height+Offset prediction: [model_CSP/caltech/fromimgnet/h/withoffset](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw)
Height+Width prediction: [model_CSP/caltech/fromimgnet/h+w/](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw)
ResNet-50 initialized from CityPersons:
Height+Offset prediction: [model_CSP/caltech/fromcity/](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw)
2. For CityPersons:
Height prediction: [model_CSP/cityperson/nooffset](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw)
Height+Offset prediction: [model_CSP/cityperson/withoffset](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw)
With this codebase, we also ran 10 trials of Height+Offset prediction. Generally, models converge after epoch 50. For Caltech and CityPersons, we test the results from epoch 50 to 120 and from epoch 50 to 150, respectively, and report the best result (*MR* under the Reasonable setting) in the following table.
| Trial | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|:-----:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:--:|
|Caltech| 4.98 | 4.75 | 4.57 | 4.84 | 4.72 | 4.15 | 5.17 | 4.60 | 4.63 | 4.91 |
|CityPersons| 11.31 | 11.17 | 11.42 | 11.69 | 11.56 | 11.05 | 11.59 | 11.78 | 11.27 | 10.62 |
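The run-to-run variance in the table can be summarized directly; the snippet below computes the best, worst, and mean MR per dataset from the values above:

```python
# MR (%) under the Reasonable setting, one value per trial (from the table).
caltech = [4.98, 4.75, 4.57, 4.84, 4.72, 4.15, 5.17, 4.60, 4.63, 4.91]
citypersons = [11.31, 11.17, 11.42, 11.69, 11.56, 11.05, 11.59, 11.78, 11.27, 10.62]

for name, mr in [("Caltech", caltech), ("CityPersons", citypersons)]:
    print("%s: best %.2f, worst %.2f, mean %.2f"
          % (name, min(mr), max(mr), sum(mr) / len(mr)))
```

Lower is better for MR, so the "best" trial is the minimum; the roughly one-point spread between best and worst trials is why multiple runs are reported.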
### Extension: Face Detection
1. Data preparation
First, download the [WiderFace](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/) dataset and put it in `./data/WiderFace`. We have provided the cache files in `./data/cache/widerface`, or you can follow [./generate_cache_wider.py](./generate_cache_wider.py) to create them.
2. Training and Test
For face detection, CSP is required to predict both the height and width of each instance to handle various aspect ratios. You can follow [./train_wider.py](./train_wider.py) to start training and [./test_wider_ms.py](./test_wider_ms.py) for multi-scale testing. As a common practice, the model trained on the official training set is evaluated on both the validation and test sets, and the results are submitted to [WiderFace](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/). To reproduce the result in the benchmark, we provide the model for Height+Width+Offset prediction in [model_CSP/widerface/](https://pan.baidu.com/s/1SSPQnbDP6zf9xf8eCDi3Fw).
Note that we adopt a similar data-augmentation strategy for face detection.