# MVSNet & R-MVSNet
<font color="red"> [News] BlendedMVS dataset is released!!!</font> ([project link](https://github.com/YoYo000/BlendedMVS)).
## About
[MVSNet](https://arxiv.org/abs/1804.02505) is a deep learning architecture for depth map inference from unstructured multi-view images, and [R-MVSNet](https://arxiv.org/abs/1902.10556) is its extension for scalable learning-based MVS reconstruction. If you find this project useful for your research, please cite:
```
@article{yao2018mvsnet,
title={MVSNet: Depth Inference for Unstructured Multi-view Stereo},
author={Yao, Yao and Luo, Zixin and Li, Shiwei and Fang, Tian and Quan, Long},
journal={European Conference on Computer Vision (ECCV)},
year={2018}
}
```
```
@article{yao2019recurrent,
title={Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference},
author={Yao, Yao and Luo, Zixin and Li, Shiwei and Shen, Tianwei and Fang, Tian and Quan, Long},
journal={Computer Vision and Pattern Recognition (CVPR)},
year={2019}
}
```
If [BlendedMVS dataset](https://github.com/YoYo000/BlendedMVS) is used in your research, please also cite:
```
@article{yao2020blendedmvs,
title={BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo Networks},
author={Yao, Yao and Luo, Zixin and Li, Shiwei and Zhang, Jingyang and Ren, Yufan and Zhou, Lei and Fang, Tian and Quan, Long},
journal={Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
```
## How to Use
### Installation
* Check out the source code ```git clone https://github.com/YoYo000/MVSNet```
* Install CUDA 9.0, cuDNN 7.0 and Python 2.7
* Install Tensorflow and other dependencies by ```sudo pip install -r requirements.txt```
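Before training, it may help to confirm that TensorFlow can actually see the GPU. The snippet below is a minimal sanity check, assuming the TensorFlow 1.x installation required above:
```
# Sanity check for the TensorFlow 1.x + CUDA 9.0 + cuDNN 7.0 setup described above.
import tensorflow as tf

print(tf.__version__)              # expect a 1.x release
print(tf.test.is_gpu_available())  # True if TensorFlow can use the GPU
```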
### Download
* Preprocessed training/validation data: [BlendedMVS](https://drive.google.com/open?id=1ilxls-VJNvJnB7IaFj7P0ehMPr7ikRCb), [DTU](https://drive.google.com/file/d/1eDjh-_bxKKnEuz5h-HXS7EDJn59clx6V/view) and [ETH3D](https://drive.google.com/open?id=1eqcv0Urr-c3Of8RKmTLhXrY_5cZCUvu5). More training resources can be found on the [BlendedMVS GitHub page](https://github.com/YoYo000/BlendedMVS)
* Preprocessed testing data: [DTU testing set](https://drive.google.com/open?id=135oKPefcPTsdtLRzoDAQtPpHuoIrpRI_), [ETH3D testing set](https://drive.google.com/open?id=1hGft7rEFnoSrnTjY_N6vp5j1QBsGcnBB), [Tanks and Temples testing set](https://drive.google.com/open?id=1YArOJaX9WVLJh4757uE8AEREYkgszrCo) and [training set](https://drive.google.com/open?id=1vOfxAMFJUalhZzydzJa1AluRhzG7ZxHS)
* Pretrained models: pretrained on [BlendedMVS](https://drive.google.com/open?id=1HacSpLl49xB77uBuI67ceZV9Rx_8wjPV), on [DTU](https://drive.google.com/open?id=1-1JyFT9ClqPO0kz0d_5I1_IHX05paS4h) and on [ETH3D](https://drive.google.com/open?id=1A3eZch06gkmvj-R0fasX6M_KNWGIOfYG)
### Training
* Enter mvsnet script folder: ``cd MVSNet/mvsnet``
* Train MVSNet on BlendedMVS, DTU and ETH3D: <br>
``python train.py --regularization '3DCNNs' --train_blendedmvs --max_w 768 --max_h 576 --max_d 128 --online_augmentation`` <br>
``python train.py --regularization '3DCNNs' --train_dtu --max_w 640 --max_h 512 --max_d 128`` <br>
``python train.py --regularization '3DCNNs' --train_eth3d --max_w 896 --max_h 480 --max_d 128`` <br>
* Train R-MVSNet: <br>
``python train.py --regularization 'GRU' --train_blendedmvs --max_w 768 --max_h 576 --max_d 128 --online_augmentation`` <br>
``python train.py --regularization 'GRU' --train_dtu --max_w 640 --max_h 512 --max_d 128`` <br>
``python train.py --regularization 'GRU' --train_eth3d --max_w 896 --max_h 480 --max_d 128`` <br>
* Specify your input training data folders using ``--blendedmvs_data_root``, ``--dtu_data_root`` and ``--eth3d_data_root``
* Specify your output log and model folders using ``--log_folder`` and ``--model_folder``
* Switch from BlendedMVS to BlendedMVG by replacing ``--train_blendedmvs`` with ``--train_blendedmvg``
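For example, a complete DTU training invocation combining the options above might look like the following (the folder paths are placeholders, not part of the repository): <br>
``python train.py --regularization '3DCNNs' --train_dtu --dtu_data_root /path/to/dtu_training --log_folder /path/to/tf_log --model_folder /path/to/tf_model --max_w 640 --max_h 512 --max_d 128``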
### Validation
* Validate MVSNet on BlendedMVS, DTU and ETH3D: <br>
``python validate.py --regularization '3DCNNs' --validate_set blendedmvs --max_w 768 --max_h 576 --max_d 128``<br>
``python validate.py --regularization '3DCNNs' --validate_set dtu --max_w 640 --max_h 512 --max_d 128``<br>
``python validate.py --regularization '3DCNNs' --validate_set eth3d --max_w 896 --max_h 480 --max_d 128``<br>
* Validate R-MVSNet: <br>
``python validate.py --regularization 'GRU' --validate_set blendedmvs --max_w 768 --max_h 576 --max_d 128``<br>
``python validate.py --regularization 'GRU' --validate_set dtu --max_w 640 --max_h 512 --max_d 128``<br>
``python validate.py --regularization 'GRU' --validate_set eth3d --max_w 896 --max_h 480 --max_d 128``<br>
* Specify your input model check point using ``--pretrained_model_ckpt_path`` and ``--ckpt_step``
* Specify your input training data folders using ``--blendedmvs_data_root``, ``--dtu_data_root`` and ``--eth3d_data_root``
* Specify your output result file using ``--validation_result_path``
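For example, validating a pretrained DTU model could look like the following (the paths and checkpoint step are placeholders that depend on the model you downloaded): <br>
``python validate.py --regularization '3DCNNs' --validate_set dtu --dtu_data_root /path/to/dtu_training --pretrained_model_ckpt_path /path/to/model_dir/model.ckpt --ckpt_step 100000 --validation_result_path /path/to/validation_result.txt --max_w 640 --max_h 512 --max_d 128``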
### Testing
* Download test data [scan9](https://drive.google.com/file/d/17ZoojQSubtzQhLCWXjxDLznF2vbKz81E/view?usp=sharing) and unzip it to the ``TEST_DATA_FOLDER`` folder
* Run MVSNet (GTX1080Ti): <br>
``python test.py --dense_folder TEST_DATA_FOLDER --regularization '3DCNNs' --max_w 1152 --max_h 864 --max_d 192 --interval_scale 1.06``
* Run R-MVSNet (GTX1080Ti): <br>
``python test.py --dense_folder TEST_DATA_FOLDER --regularization 'GRU' --max_w 1600 --max_h 1200 --max_d 256 --interval_scale 0.8``
* Specify your input model check point using ``--pretrained_model_ckpt_path`` and ``--ckpt_step``
* Specify your input dense folder using ``--dense_folder``
* Inspect the .pfm format outputs in ``TEST_DATA_FOLDER/depths_mvsnet`` using ``python visualize.py XXX.pfm``, where ``XXX.pfm`` is one of the output files (a stand-alone reader sketch also follows the table below). For example, the depth map and probability map for image `00000012` should look like:
<img src="doc/image.png" width="250"> | <img src="doc/depth_example.png" width="250"> | <img src="doc/probability_example.png" width="250">
:---------------------------------------:|:---------------------------------------:|:---------------------------------------:
reference image |depth map | probability map
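Alternatively, the .pfm outputs can be loaded directly in your own scripts. The sketch below is an illustrative, stand-alone reader for the standard PFM layout (the repository ships its own loader in ``preprocess.py``); the file name at the end is a placeholder.
```
# Illustrative stand-alone PFM reader (the repository has its own in preprocess.py).
# A PFM file stores a text header ('PF' color / 'Pf' grayscale), the image size,
# a scale factor whose sign encodes endianness, then float32 rows bottom-to-top.
import re
import numpy as np

def read_pfm(filename):
    with open(filename, 'rb') as f:
        header = f.readline().rstrip()
        color = header == b'PF'                         # 'Pf' means a single channel
        dims = re.match(br'^(\d+)\s+(\d+)\s*$', f.readline())
        width, height = int(dims.group(1)), int(dims.group(2))
        scale = float(f.readline().rstrip())
        endian = '<' if scale < 0 else '>'              # negative scale = little-endian
        data = np.fromfile(f, endian + 'f')
        shape = (height, width, 3) if color else (height, width)
        return np.flipud(data.reshape(shape))           # rows are stored bottom-up

# Placeholder file name; point this at an actual output in TEST_DATA_FOLDER/depths_mvsnet.
depth = read_pfm('depth_map.pfm')
print(depth.shape, float(depth.min()), float(depth.max()))
```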
### Post-Processing
R/MVSNet itself only produces per-view depth maps. To generate the 3D point cloud, we need to apply depth map filtering/fusion as post-processing. As our implementation of this step depends on the [Altizure](https://www.altizure.com/) internal library, we currently cannot provide the corresponding code. Fortunately, depth map filtering/fusion is a standard step in MVS reconstruction, and similar implementations exist in other open-source MVS algorithms. We provide the script ``depthfusion.py`` to utilize [fusibile](https://github.com/kysucix/fusibile) for post-processing (thanks to Silvano Galliani for the excellent code!).
To run the post-processing:
* Check out the modified version of fusibile: ```git clone https://github.com/YoYo000/fusibile```
* Build fusibile with ```cmake .``` and ```make```; this generates the executable, referred to below as ``FUSIBILE_EXE_PATH``
* Run post-processing (use ``--prob_threshold 0.8`` instead if using 3DCNNs regularization; a sketch of what this filtering does follows this list):
``python depthfusion.py --dense_folder TEST_DATA_FOLDER --fusibile_exe_path FUSIBILE_EXE_PATH --prob_threshold 0.3``
* The final point cloud is stored in `TEST_DATA_FOLDER/points_mvsnet/consistencyCheck-TIME/final3d_model.ply`.
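Conceptually, ``--prob_threshold`` performs photometric filtering: depth estimates whose probability map value falls below the threshold are rejected before fusion. Below is a minimal, illustrative sketch of that filtering on a single view; it reuses the hypothetical ``read_pfm`` helper from the testing section and placeholder file names, and is not the repository's own implementation.
```
import numpy as np

# Illustrative photometric filtering, not the repository's implementation.
# File names are placeholders for outputs in TEST_DATA_FOLDER/depths_mvsnet.
depth = read_pfm('depth_map.pfm')        # per-pixel depth estimates
prob = read_pfm('probability_map.pfm')   # per-pixel confidence

prob_threshold = 0.3                     # 0.3 for GRU, 0.8 for 3DCNNs (see above)
filtered_depth = np.where(prob >= prob_threshold, depth, 0.0)  # 0 marks rejected pixels

kept = float((prob >= prob_threshold).mean())
print('kept %.1f%% of pixels' % (100.0 * kept))
```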
We observe that ``depthfusion.py`` produces similar but quantitatively worse results than our own implementation. For the detailed differences, please refer to the [MVSNet paper](https://arxiv.org/abs/1804.02505) and [Galliani's paper](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Galliani_Massively_Parallel_Multiview_ICCV_2015_paper.pdf). The point cloud for `scan9` should look like:
<img src="doc/fused_point_cloud.png" width="375"> | <img src="doc/gt_point_cloud.png" width="375">
:--------------------------------------------------:|:----------------------------------------------:
point cloud result | ground truth point cloud