## High Quality Monocular Depth Estimation via Transfer Learning
## Results
* KITTI
<p align="center"><img style="max-width:500px" src="https://s3-eu-west-1.amazonaws.com/densedepth/densedepth_results_01.jpg" alt="KITTI"></p>
* NYU Depth V2
<p align="center">
<img style="max-width:500px" src="https://s3-eu-west-1.amazonaws.com/densedepth/densedepth_results_02.jpg" alt="NYU Depth v2">
<img style="max-width:500px" src="https://s3-eu-west-1.amazonaws.com/densedepth/densedepth_results_03.jpg" alt="NYU Depth v2 table">
</p>
## Requirements
* This code was tested with Keras 2.2.4, TensorFlow 1.13, and CUDA 10.0 on a machine with an NVIDIA Titan V and 16 GB+ RAM running Windows 10 or Ubuntu 16.
* Other required packages: `keras pillow matplotlib scikit-learn scikit-image opencv-python pydot`, plus `GraphViz` for model graph visualization and `PyGLM PySide2 pyopengl` for the GUI demo.
* Minimum hardware tested for inference: NVIDIA GeForce 940MX (laptop) / NVIDIA GeForce GTX 950 (desktop).
* Training takes about 24 hours on a single NVIDIA TITAN RTX with batch size 8.
## Pre-trained Models
* [NYU Depth V2](https://drive.google.com/file/d/19dfvGvDfCRYaqxVKypp1fRHwK7XtSjVu/view?usp=sharing) (165 MB)
* [KITTI](https://drive.google.com/file/d/19flUnbJ_6q2xtjuUQvjt1Y1cJRwOr-XY/view?usp=sharing) (165 MB)
## Demos
* After downloading the pre-trained model (nyu.h5), run `python test.py`. You should see a montage of images with their estimated depth maps; a minimal sketch of this kind of inference appears below the images.
* **[Update]** A Qt demo showing 3D point clouds from the webcam or an image. Simply run `python demo.py`. It requires the packages `PyGLM PySide2 pyopengl`.
<p align="center">
<img style="max-width:500px" src="https://s3-eu-west-1.amazonaws.com/densedepth/densedepth_results_04.jpg" alt="RGBD Demo">
</p>
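The snippet below is a minimal sketch of the kind of inference `test.py` performs: load the pre-trained Keras model and predict a depth map for one RGB image. The `BilinearUpSampling2D` import from the repo's `layers.py` and the example image path are assumptions based on the repository layout, not a verbatim copy of the script.

```python
import numpy as np
from PIL import Image
from keras.models import load_model

# Custom upsampling layer assumed to live in the repo's layers.py; it must be
# registered as a custom object so Keras can deserialize the saved model.
from layers import BilinearUpSampling2D

# compile=False: we only need the forward pass, not the custom training loss.
model = load_model('nyu.h5',
                   custom_objects={'BilinearUpSampling2D': BilinearUpSampling2D},
                   compile=False)

# Load an RGB image, scale it to [0, 1], and add a batch dimension.
rgb = np.asarray(Image.open('examples/1_image.png').convert('RGB'), dtype=np.float32) / 255.0
batch = np.expand_dims(rgb, axis=0)  # shape: (1, H, W, 3)

# Predict; the decoder outputs a single-channel depth map.
depth = model.predict(batch)[0, :, :, 0]
print('depth map shape:', depth.shape)
```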
## Data
* [NYU Depth V2 (50K)](https://tinyurl.com/nyu-data-zip) (4.1 GB): You don't need to extract the dataset, since the code loads the entire zip file into memory during training (see the loading sketch after this list).
* [KITTI](http://www.cvlibs.net/datasets/kitti/): copy the raw data to a folder with the path `../kitti`. Our method expects dense input depth maps, so you need to run a depth [inpainting method](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html) on the LiDAR data. For our experiments, we used our [Python re-implementation](https://gist.github.com/ialhashim/be6235489a9c43c6d240e8331836586a) of the MATLAB code provided with the NYU Depth V2 toolbox. Inpainting the entire 80K images took 2 hours on an 80-node cluster. For training, we used the subset defined [here](https://s3-eu-west-1.amazonaws.com/densedepth/kitti_train.csv).
* [Unreal-1k](https://github.com/ialhashim/DenseDepth): coming soon.
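Because the NYU archive is used without extraction, a loader along the following lines can decode samples straight from the in-memory zip. This is only a sketch, not the repo's `data.py`, and the entry names inside the archive are hypothetical.

```python
from io import BytesIO
from zipfile import ZipFile
import numpy as np
from PIL import Image

# Read the whole 4.1 GB archive into RAM once; individual samples are then
# decoded on demand without touching the filesystem again.
with open('nyu_data.zip', 'rb') as f:
    archive = ZipFile(BytesIO(f.read()))

def load_pair(rgb_name, depth_name):
    """Decode one RGB/depth pair from the in-memory archive (names are hypothetical)."""
    rgb = np.asarray(Image.open(BytesIO(archive.read(rgb_name))), dtype=np.float32) / 255.0
    depth = np.asarray(Image.open(BytesIO(archive.read(depth_name))), dtype=np.float32)
    return rgb, depth

# Example usage (hypothetical entry names inside the zip):
# rgb, depth = load_pair('data/nyu2_train/scene_0001/1.jpg',
#                        'data/nyu2_train/scene_0001/1.png')
```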
## Training
* Run `python train.py --data nyu --gpus 4 --bs 8`.
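A rough sketch (not the repo's actual `train.py`) of how the `--gpus 4 --bs 8` options typically map onto Keras 2.2.4 multi-GPU training; `create_model` and `depth_loss_function` are assumed to correspond to the repository's `model.py` and `loss.py`.

```python
from keras.optimizers import Adam
from keras.utils import multi_gpu_model

from model import create_model         # assumed model factory in model.py
from loss import depth_loss_function   # assumed custom loss in loss.py

gpus, batch_size = 4, 8

# Build the single-GPU graph, then replicate it across the requested GPUs.
base_model = create_model()
train_model = multi_gpu_model(base_model, gpus=gpus) if gpus > 1 else base_model

train_model.compile(loss=depth_loss_function, optimizer=Adam(lr=1e-4, amsgrad=True))
# train_model.fit_generator(...) would then iterate over the in-memory NYU
# generator (see data.py), drawing batch_size samples per step.
```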
## Evaluation
* Download, but don't extract, the ground truth test data from [here](https://s3-eu-west-1.amazonaws.com/densedepth/nyu_test.zip) (1.4 GB). Then simply run `python evaluate.py`.
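For reference, the error metrics commonly reported on NYU Depth V2 (threshold accuracies δ < 1.25^i, absolute relative error, RMSE, and log10 error) can be computed as below. This is an illustrative implementation of the standard metrics, not the repo's `evaluate.py`.

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard monocular-depth metrics over valid (positive) ground-truth pixels."""
    mask = gt > 0
    gt, pred = gt[mask], pred[mask]

    # Threshold accuracy: fraction of pixels with max(gt/pred, pred/gt) < 1.25^i.
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()
    d2 = (thresh < 1.25 ** 2).mean()
    d3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)        # absolute relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))        # root mean squared error
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))
    return d1, d2, d3, abs_rel, rmse, log10
```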