# self-supervised-depth-completion
<p align="center">
<img src="https://j.gifs.com/rRrOW4.gif" alt="photo not available" height="50%">
</p>
## Dependencies
This code was tested with Python 3 and PyTorch 1.0 on Ubuntu 16.04.
```bash
pip install numpy matplotlib Pillow
pip install torch torchvision # pytorch
# self-supervised training additionally requires OpenCV with the contrib modules
pip install opencv-contrib-python==3.4.2.16
```
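The versions above are simply what the authors tested against. Before launching training, a quick sanity check of the environment can be sketched as follows (a hypothetical helper, not part of the repository; the names passed in are Python import names, e.g. `cv2` for `opencv-contrib-python` and `PIL` for `Pillow`):

```python
import importlib.util

def check_environment(modules=("numpy", "matplotlib", "PIL", "torch", "torchvision", "cv2")):
    """Report which required modules are importable in the current environment."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}

# Example: list any missing dependencies without triggering import errors.
missing = [name for name, ok in check_environment().items() if not ok]
if missing:
    print("missing modules:", ", ".join(missing))
```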
## Data
- Download the [KITTI Depth](http://www.cvlibs.net/datasets/kitti/eval_depth.php?benchmark=depth_completion) Dataset from their website. Use the following scripts to extract corresponding RGB images from the raw dataset.
```bash
./download/rgb_train_downloader.sh
./download/rgb_val_downloader.sh
```
The downloaded RGB files will be stored in the `../data/data_rgb` folder. The overall directory structure for code, data, and results is as follows (updated on Oct 1, 2019):
```
.
├── self-supervised-depth-completion
├── data
|   ├── data_depth_annotated
|   |   ├── train
|   |   └── val
|   ├── data_depth_velodyne
|   |   ├── train
|   |   └── val
|   ├── depth_selection
|   |   ├── test_depth_completion_anonymous
|   |   ├── test_depth_prediction_anonymous
|   |   └── val_selection_cropped
|   └── data_rgb
|       ├── train
|       └── val
└── results
```
## Trained Models
Download our trained models at http://datasets.lids.mit.edu/self-supervised-depth-completion to a folder of your choice.
- supervised training (i.e., models trained with the semi-dense lidar ground truth): http://datasets.lids.mit.edu/self-supervised-depth-completion/supervised/
- self-supervised training (i.e., models trained with the photometric loss + sparse depth loss + smoothness loss): http://datasets.lids.mit.edu/self-supervised-depth-completion/self-supervised/
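The self-supervised objective is a weighted sum of the three terms named above. As a rough illustration only (a pure-Python sketch on nested lists; the weights and helper names are hypothetical, not the repository's actual implementation in `criteria.py`), the sparse depth term penalizes prediction error only at pixels where a lidar measurement exists:

```python
def sparse_depth_loss(pred, sparse_gt):
    """Masked L1 loss: compare prediction to lidar depth only at valid (nonzero) pixels."""
    err, count = 0.0, 0
    for row_p, row_g in zip(pred, sparse_gt):
        for p, g in zip(row_p, row_g):
            if g > 0:  # lidar projections are sparse; 0 marks "no measurement"
                err += abs(p - g)
                count += 1
    return err / count if count else 0.0

def total_loss(photo, depth, smooth, w_photo=0.1, w_depth=1.0, w_smooth=0.1):
    """Hypothetical weighted combination of the three self-supervised terms."""
    return w_photo * photo + w_depth * depth + w_smooth * smooth
```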
## Commands
A complete list of training options is available with
```bash
python main.py -h
```
For instance,
```bash
# train with the KITTI semi-dense annotations, rgbd input, and batch size 1
python main.py --train-mode dense -b 1 --input rgbd
# train with the self-supervised framework, not using ground truth
python main.py --train-mode sparse+photo
# resume previous training
python main.py --resume [checkpoint-path]
# test the trained model on the val_selection_cropped data
python main.py --evaluate [checkpoint-path] --val select
```
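When evaluating, the KITTI depth-completion convention is to compute errors only over pixels that have ground-truth depth. As a minimal sketch of that convention (a hypothetical pure-Python helper; the repository's actual metrics live in `metrics.py`):

```python
import math

def rmse(pred, gt):
    """RMSE in the same units as the depth maps, over pixels with ground truth (> 0)."""
    sq_err, count = 0.0, 0
    for row_p, row_g in zip(pred, gt):
        for p, g in zip(row_p, row_g):
            if g > 0:  # only pixels with a ground-truth measurement contribute
                sq_err += (p - g) ** 2
                count += 1
    return math.sqrt(sq_err / count) if count else 0.0
```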