## Spatial CNN for Traffic Lane Detection
### Paper
Xingang Pan, Jianping Shi, Ping Luo, Xiaogang Wang, Xiaoou Tang. ["Spatial As Deep: Spatial CNN for Traffic Scene Understanding"](https://arxiv.org/abs/1712.06080), AAAI2018
This code is modified from [fb.resnet.torch](https://github.com/facebook/fb.resnet.torch).
### Introduction
Demo video is available [here](https://youtu.be/ey5XPs1012k).
<img align="middle" width="700" height="280" src="CNNvsSCNN.jpg">
- Spatial CNN enables explicit and effective spatial information propagation between neurons in the same layer of a CNN.
- It is extremely effective in cases where objects have strong shape priors, such as the long, thin, continuous structure of lane lines.
- VGG16+SCNN outperforms ResNet101 on lane detection.
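The core idea can be sketched in a few lines: in the downward pass (SCNN_D), each row of the feature map receives a message from the row above, computed by a small convolution, passed through a ReLU, and added residually before propagating further. This is a minimal NumPy illustration of that slice-by-slice recurrence, not the paper's implementation; the function name and shapes are my own, and the real model runs four such passes (down, up, left, right) with learned kernels.

```python
import numpy as np

def scnn_downward(x, w):
    """Sketch of SCNN's downward pass: propagate information row by row.
    x: feature map of shape (C, H, W); w: kernel of shape (C, C, k)
    mixing channels over a width-k window of the previous row."""
    C, H, W = x.shape
    k = w.shape[2]
    pad = k // 2
    out = x.copy()
    for i in range(1, H):
        # previous (already updated) row, zero-padded along the width
        prev = np.pad(out[:, i - 1, :], ((0, 0), (pad, pad)))
        # sliding width-k windows over the previous row: (C_in, W, k)
        win = np.stack([prev[:, j:j + k] for j in range(W)], axis=1)
        # 1-D convolution across channels: (C_out, W)
        msg = np.einsum('oit,iwt->ow', w, win)
        # ReLU nonlinearity, then residual add, as in the paper
        out[:, i, :] += np.maximum(msg, 0.0)
    return out
```

Because each row is updated before the next one reads it, information from the top of the image can reach the bottom in a single pass, which is what lets thin, occluded lane markings be completed from context.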
### Requirements
- [Torch](http://torch.ch/docs/getting-started.html), please follow the installation instructions at [fb.resnet.torch](https://github.com/facebook/fb.resnet.torch).
- Matlab (for tools/prob2lines), version R2014a or later.
- OpenCV (for tools/lane_evaluation), version 2.4.8 (later 2.4.x versions should also work).
- Hardware:
For testing, a GPU with 3 GB of memory suffices.
For training, we recommend 4 GPUs with 12 GB of memory each.
### Before Start
1. Clone the SCNN repository
```Shell
git clone https://github.com/XingangPan/SCNN.git
```
We will refer to the directory you cloned SCNN into as `$SCNN_ROOT`.
2. Download CULane dataset
```Shell
mkdir -p data/CULane
cd data/CULane
```
Download [CULane](https://xingangpan.github.io/projects/CULane.html) dataset and extract here. (Note: If you have downloaded the dataset before 16th April 2018, please update the raw annotations of train&val set as described in the dataset website.)
You should end up with a structure like this:
```Shell
$SCNN_ROOT/data/CULane/driver_xx_xxframe # data folders x6
$SCNN_ROOT/data/CULane/laneseg_label_w16 # lane segmentation labels
$SCNN_ROOT/data/CULane/list # data lists
```
### Testing
1. Download our pre-trained models to `./experiments/pretrained`
```Shell
cd $SCNN_ROOT/experiments/pretrained
```
Download [our best-performing model](https://drive.google.com/open?id=1Wv3r3dCYNBwJdKl_WPEfrEOt-XGaROKu) here.
2. Run test script
```Shell
cd $SCNN_ROOT
sh ./experiments/test.sh
```
Testing results (probability map of lane markings) are saved in `experiments/predicts/` by default.
3. Get curve line from probability map
```Shell
cd tools/prob2lines
matlab -nodisplay -r "main;exit" # or you may simply run main.m from matlab interface
```
The generated line coordinates would be saved in `tools/prob2lines/output/` by default.
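Conceptually, prob2lines turns each lane's probability map into a polyline by scanning rows and keeping the column with the strongest response wherever it exceeds a threshold. The sketch below illustrates that idea only; the function name and threshold are hypothetical, and the actual MATLAB code additionally smooths and resamples the coordinates.

```python
import numpy as np

def prob_to_lane(prob, thr=0.5):
    """Extract (row, col) lane points from one lane's probability map.
    Rows whose peak response is at or below `thr` are skipped."""
    coords = []
    for r in range(prob.shape[0]):
        c = int(np.argmax(prob[r]))   # column of the strongest response
        if prob[r, c] > thr:
            coords.append((r, c))
    return coords
```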
4. Calculate precision, recall, and F-measure
```Shell
cd $SCNN_ROOT/tools/lane_evaluation
make
sh Run.sh # it may take over 30min to evaluate
```
Note: `Run.sh` evaluates each scenario separately, while `run.sh` evaluates the dataset as a whole. You may use `calTotal.m` to calculate overall performance across all scenarios.
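The F-measure reported by the evaluation is the harmonic mean of precision and recall over matched lanes (a predicted lane counts as a true positive when its IoU with a ground-truth lane exceeds a threshold). A minimal sketch of that final aggregation step, with a hypothetical function name:

```python
def f_measure(tp, fp, fn):
    """F1 score from counts of matched predictions (tp), unmatched
    predictions (fp), and missed ground-truth lanes (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)
```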
You should now be able to reproduce our results from the paper.
### Training
1. Download VGG16 pretrained on ImageNet
```Shell
cd $SCNN_ROOT/experiments/models
```
Download VGG16 model [here](https://drive.google.com/open?id=12RLXY6o8gaGMY1K1g6d447Iby9ewVIyV) and move it to `$SCNN_ROOT/experiments/models/vgg`.
2. Generate SCNN model
```Shell
th SCNN-gen.lua
```
The generated model will be saved in `./vgg_SCNN_DULR_w9` by default.
3. Training SCNN
```Shell
cd $SCNN_ROOT
sh ./experiments/train.sh
```
The training process should start and trained models would be saved in `$SCNN_ROOT/experiments/models/vgg_SCNN_DULR_w9` by default.
Then you can test the trained model following the Testing steps above. If you change the model's path or name, remember to update the test script accordingly.
### Other Implementations
**Tensorflow** implementation reproduced by [cardwing](https://github.com/cardwing): https://github.com/cardwing/Codes-for-Lane-Detection.
[new!] **Pytorch** implementation reproduced by [voldemortX](https://github.com/voldemortX): https://github.com/voldemortX/pytorch-auto-drive.
### Citing SCNN or CULane
```
@inproceedings{pan2018SCNN,
author = {Xingang Pan and Jianping Shi and Ping Luo and Xiaogang Wang and Xiaoou Tang},
title = {Spatial As Deep: Spatial CNN for Traffic Scene Understanding},
booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
month = {February},
year = {2018}
}
```
### Acknowledgment
Most of the work of building the CULane dataset was done by [Xiaohang Zhan](https://xiaohangzhan.github.io/), Jun Li, and Xudong Cao. We thank them for their contributions.