## Introduction
This is the PyTorch implementation of **AtLoc**, a simple and efficient neural architecture for robust visual localization.
#### Demos and Qualitative Results (click below for the video)
<p align="center"> <a href="https://youtu.be/_8NQXBadklU"><img src="./figures/real.gif" width="100%"></a> </p>
## Setup
AtLoc uses a Conda environment that makes it easy to install all dependencies.
1. Install [miniconda](https://docs.conda.io/en/latest/miniconda.html) with Python 2.7.
2. Create the `AtLoc` Conda environment: `conda env create -f environment.yml`.
3. Activate the environment: `conda activate py27pt04`.
4. Note that our code has been tested with PyTorch v0.4.1 (the environment.yml file should take care of installing the appropriate version).
## Data
We support the [7Scenes](https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/) and [Oxford RobotCar](http://robotcar-dataset.robots.ox.ac.uk/) datasets right now. You can also write your own PyTorch dataloader for other datasets and put it in the `data` directory.
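As a sketch, a custom loader only needs the two methods PyTorch's `Dataset` interface requires, `__len__` and `__getitem__`. The class name and sample format below are hypothetical, not the repo's actual loaders:

```python
# Minimal dataset skeleton. In practice you would subclass
# torch.utils.data.Dataset; only __len__ and __getitem__ are required.
class CustomScenes(object):
    def __init__(self, samples):
        # samples: list of (image_path, 7-DoF pose) pairs (hypothetical format)
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img_path, pose = self.samples[idx]
        # Load and transform the image here; return an (image, pose) pair.
        return img_path, pose
```

A loader like this can then be wrapped in a `torch.utils.data.DataLoader` for batching and shuffling.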
### Special instructions for RobotCar:
1. Download [this fork](https://github.com/samarth-robo/robotcar-dataset-sdk/tree/master) of the dataset SDK, and run `cd data && ./robotcar_symlinks.sh` after editing the `ROBOTCAR_SDK_ROOT` variable in it appropriately.
2. For each sequence, download the `stereo_centre`, `vo` and `gps` tar files from the dataset website. The directory for each scene (e.g. `loop`) contains `.txt` files defining the train/test split.
3. To make training faster, we pre-processed the images using `data/process_robotcar.py`. This script undistorts the images using the camera models provided by the dataset, and scales them such that the shortest side is 256 pixels.
4. Pixel and pose statistics must be calculated before training. Use `data/dataset_mean.py`, which also saves the statistics to the proper location. Pre-computed values for RobotCar and 7Scenes are provided.
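The rescaling in step 3 preserves the aspect ratio while fixing the shorter side at 256 pixels. A minimal helper (hypothetical name, illustrating the arithmetic only):

```python
def scaled_size(w, h, shortest=256):
    """Return (new_w, new_h) with the shorter side scaled to `shortest`,
    preserving the aspect ratio."""
    scale = float(shortest) / min(w, h)
    return int(round(w * scale)), int(round(h * scale))
```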
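The pixel statistics in step 4 are per-channel means and standard deviations over the training images, typically used to normalize inputs. A NumPy sketch (hypothetical function, not the script's actual code):

```python
import numpy as np

def pixel_stats(images):
    """Per-channel mean and std over a list of HxWx3 uint8 images,
    with intensities scaled from 0-255 to 0-1."""
    pixels = np.concatenate([im.reshape(-1, 3) for im in images]) / 255.0
    return pixels.mean(axis=0), pixels.std(axis=0)
```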
## Running the code
### Training
The executable script is `train.py`. For example:
- AtLoc on `loop` from `RobotCar`:
```
python train.py --dataset RobotCar --scene loop --model AtLoc --gpus 0
```
- AtLocLstm on `loop` from `RobotCar`:
```
python train.py --dataset RobotCar --scene loop --model AtLoc --lstm True --gpus 0
```
- AtLoc+ on `loop` from `RobotCar`:
```
python train.py --dataset RobotCar --scene loop --model AtLocPlus --gamma -3.0 --gpus 0
```
The meanings of the command-line parameters are documented in `train.py`, and the values of the hyperparameters are defined in `tools/options.py`.
### Inference
Trained models for some of the experiments presented in the paper can be downloaded [here](https://drive.google.com/drive/folders/1inY29zupeCmvIF5SsJhQDEzo_jzY0j6Q). The inference script is `eval.py`. The examples below assume the models have been downloaded to `logs`.
- AtLoc on `loop` from `RobotCar`:
```
python eval.py --dataset RobotCar --scene loop --model AtLoc --gpus 0 --weights ./logs/RobotCar_loop_AtLoc_False/models/epoch_300.pth.tar
```
### Attention visualization
`saliency_map.py` computes the network attention visualizations and saves them as a video.
- For the AtLoc model trained on `loop` from `RobotCar`:
```
python saliency_map.py --dataset RobotCar --scene loop --model AtLoc --gpus 0 --weights ./logs/RobotCar_loop_AtLoc_False/models/epoch_300.pth.tar
```