# MVDepthNet
## A Real-time Multiview Depth Estimation Network
Given multiple images and the corresponding camera poses, a cost volume is firstly calculated and then combined with the reference image to generate the depth map. An example is
<img src="fig/example.png" alt="MVDepthNet example" width="640" height="100">
## 1.0 Prerequisites
+ **PyTorch**
The implementation uses PyTorch 0.3. Only small changes are needed to run the network in newer versions.
+ **OpenCV**
+ **NumPy**
## 2.0 Download the model parameters and the samples
UPDATE: the Dropbox links have failed because of heavy traffic. Use these BaiduPan links instead: model weights: ```link: https://pan.baidu.com/s/1CjV6iWBbjWOxGetf2ZXStQ extraction code: gbfg``` and sample data: ```link: https://pan.baidu.com/s/1feYfF6qSd7z7_anmR_rgnQ extraction code: g1fo```.
We provide a trained model used in our paper evaluation and some images to run the example code.
Please download the model via [this link](https://www.dropbox.com/s/o1n1w0chlrw4lqt/opensource_model.pth.tar?dl=0) and the sample images via [this link](https://www.dropbox.com/s/hr59f24byc3x8z3/sample_data.pkl.tar.gz?dl=0). Put the model ```opensource_model.pth.tar``` under the project folder and extract ```sample_data.pkl.tar.gz``` there as well.
## 3.0 Run the example
Just run
```python example.py```
## 4.0 Use your own data
To use the network, you need to provide a left image, a right image, the camera intrinsic parameters, and the relative camera pose. Images are normalized using the mean ```81.0``` and the std ```35.0```, for example
```normalized_image = (image - 81.0)/35.0```.
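The normalization step can be sketched as follows (a minimal NumPy sketch; the function name is illustrative, not part of the repo):

```python
import numpy as np

def normalize_image(image):
    """Normalize a uint8 image with the mean 81.0 and std 35.0 expected by MVDepthNet."""
    return (image.astype(np.float32) - 81.0) / 35.0

# Example: a dummy 320x256 grayscale frame whose pixels equal the mean
dummy = np.full((256, 320), 81, dtype=np.uint8)
normalized = normalize_image(dummy)
print(normalized.mean())  # 0.0, since every pixel equals the mean
```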
We provide the file ```example2.py``` to show how to run the network on your own data. ```left_pose``` and ```right_pose``` are the camera poses in the world frame. The final visualization window shows ```left_image```, ```right_image```, and the predicted depth. A red dot in the ```left_image``` is used to test the accuracy of the relative pose: the red line in the ```right_image``` is the epipolar line of the red dot, and the pixel corresponding to the red dot must lie on this line. Otherwise, the pose is not accurate. You can change the position of the tested point in line 56.
To get good results, images should have enough **translation** and overlap with each other. Rotation does not help the depth estimation.
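The epipolar-line check can be reproduced outside the script. Here is a hedged sketch, assuming the poses are 4x4 camera-to-world homogeneous matrices (as "camera pose in the world frame" suggests); the function names are illustrative:

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_line(K, left_pose, right_pose, point_left):
    """Epipolar line in the right image for a pixel (u, v) in the left image.

    left_pose / right_pose: 4x4 camera-to-world matrices (assumed convention).
    Returns line coefficients (a, b, c) with a*u + b*v + c = 0.
    """
    # Relative transform taking left-camera coordinates to right-camera coordinates
    T_rl = np.linalg.inv(right_pose) @ left_pose
    R, t = T_rl[:3, :3], T_rl[:3, 3]
    # Essential matrix from the relative pose, then the fundamental matrix
    E = skew(t) @ R
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ E @ K_inv
    x_l = np.array([point_left[0], point_left[1], 1.0])
    return F @ x_l
```

If the poses are accurate, the pixel in the right image that corresponds to ```point_left``` satisfies `a*u + b*v + c ≈ 0` on the returned line.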
### 4.1 Use multiple images
Please refer to ```depthNet_model.py```: use the function ```getVolume``` to construct one cost volume per image pair and average them. Feed the model the reference image and the averaged cost volume to get the estimated depth maps.
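A minimal NumPy sketch of the averaging step (shapes are illustrative; in the actual code the volumes are PyTorch tensors returned by ```getVolume```):

```python
import numpy as np

def average_cost_volumes(volumes):
    """Average a list of per-pair cost volumes (each shaped [n_depth_planes, H, W])."""
    return np.mean(np.stack(volumes, axis=0), axis=0)

# Dummy volumes standing in for getVolume() outputs (64 depth planes, 256x320 frame)
v1 = np.zeros((64, 256, 320), dtype=np.float32)
v2 = np.ones((64, 256, 320), dtype=np.float32)
avg = average_cost_volumes([v1, v2])
print(avg.shape, avg.mean())  # (64, 256, 320) 0.5
```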