# Point-NeRF: Point-based Neural Radiance Fields (CVPR 2022 Oral 🤩)
<img src="./images/Adobe-Logos.png" width=120px /><img src="images/USC-Logos.png" width=120px />
[Project Sites](https://xharlie.github.io/projects/project_sites/pointnerf/index.html)
| [Paper](https://arxiv.org/pdf/2201.08845.pdf) |
Primary contact: [Qiangeng Xu](https://xharlie.github.io/)
Point-NeRF uses neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism.
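As a rough illustration of the aggregation step, here is a minimal sketch in PyTorch (not the repository's actual code; every name below is made up for illustration): each shading point sampled along a ray blends the features of its nearby neural points with inverse-distance weights, and a small MLP decodes the blended feature into density and radiance.
```
import torch

def aggregate_neural_points(shading_xyz, point_xyz, point_feat, mlp, k=8):
    """Toy sketch: blend the k nearest neural points around each shading point
    with inverse-distance weights, then decode density/radiance with an MLP.

    shading_xyz: (M, 3) ray-marching sample positions
    point_xyz:   (N, 3) neural point positions
    point_feat:  (N, C) per-point neural features
    mlp:         any torch module mapping (M, C) -> (M, 4) [density + RGB]
    """
    # pairwise distances between shading points and neural points
    dist = torch.cdist(shading_xyz, point_xyz)               # (M, N)
    knn_dist, knn_idx = dist.topk(k, largest=False, dim=-1)  # (M, k)

    # inverse-distance weights, normalized per shading point
    w = 1.0 / (knn_dist + 1e-8)
    w = w / w.sum(dim=-1, keepdim=True)                      # (M, k)

    # weighted blend of neighbor features
    neigh_feat = point_feat[knn_idx]                         # (M, k, C)
    blended = (w.unsqueeze(-1) * neigh_feat).sum(dim=1)      # (M, C)

    # decode to density + radiance; volume rendering then composites along rays
    return mlp(blended)                                      # (M, 4)
```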
<!-- <img src="./images/pipeline.png" /> -->
[![CVPR 2022 Oral Presentation](https://github.com/Xharlie/pointnerf/blob/master/images/youtube.png)](https://youtu.be/zmR9j-4AebA)
## Reference
Please cite our paper if you are interested in <strong>Point-NeRF: Point-based Neural Radiance Fields</strong>.
```
@inproceedings{xu2022point,
title={Point-nerf: Point-based neural radiance fields},
author={Xu, Qiangeng and Xu, Zexiang and Philip, Julien and Bi, Sai and Shu, Zhixin and Sunkavalli, Kalyan and Neumann, Ulrich},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5438--5448},
year={2022}
}
```
## Updates ##
1. To replace pycuda, we have implemented PyTorch CUDA functions for grouping neural points in world coordinates. Simply set wcoord_query=-1 in your configuration file if the original setting is wcoord_query=1 (see dev_scripts/w_n360/chair_cuda.sh).
2. We have received constructive feedback that when Point-NeRF uses MVSNet to reconstruct the point cloud, the point fusion after MVSNet's depth estimation uses the alpha-channel information in the NeRF-Synthetic Dataset. This is because MVSNet cannot handle the background very well. To improve fairness, we include new training scripts and results for Point-NeRF + MVSNet that use the background color for filtering instead. The results (see below) are similar to those previously reported.
| | Chair | Drums | Lego | Mic | Materials | Ship | Hotdog | Ficus | Avg |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| PSNR | 35.60 | 26.04 | 35.27 | 35.91 | 29.65 | 30.61 | 37.34 | 35.61 | 33.25 |
| SSIM | 0.991 | 0.954 | 0.989 | 0.994 | 0.971 | 0.938 | 0.991 | 0.992 | 0.978 |
| LPIPS (VGG) | 0.023 | 0.078 | 0.021 | 0.014 | 0.071 | 0.129 | 0.036 | 0.025 | 0.050 |
| LPIPS (Alex) | 0.010 | 0.055 | 0.010 | 0.007 | 0.041 | 0.076 | 0.016 | 0.011 | 0.028 |
This issue only affects cases where Point-NeRF uses MVSNet on the NeRF-Synthetic Dataset. The COLMAP results and the results on other datasets are not impacted.
An even more rigorous reconstruction approach would avoid using knowledge of the background color or any other point filtering. We therefore suggest combining Point-NeRF with more powerful MVS models, such as [TransMVSNet](https://github.com/megvii-research/TransMVSNet).
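For reference, below is a minimal sketch of the kind of background-color filtering described above. It is only an illustration under the assumption of a known, constant background color; `unproject` and the variable names are hypothetical and are not the API of the released scripts.
```
import numpy as np

def mask_background_pixels(image, bg_color=(1.0, 1.0, 1.0), tol=0.02):
    """Return a boolean mask of pixels that are NOT background.

    image:    (H, W, 3) float image in [0, 1]
    bg_color: background color used when rendering the synthetic views
    tol:      per-channel tolerance for treating a pixel as background
    """
    bg = np.asarray(bg_color, dtype=image.dtype)
    is_bg = np.all(np.abs(image - bg) < tol, axis=-1)
    return ~is_bg

# usage sketch: keep only depth samples whose source pixel is foreground
# (`depth` is an MVSNet depth map aligned with `image`; `unproject` is a
#  hypothetical helper that lifts pixels to 3D points)
# keep = mask_background_pixels(image)
# points = unproject(depth[keep], pixel_coords[keep], intrinsics, pose)
```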
## Overall Instructions
1. Please first install the libraries as below and download/prepare the datasets as instructed.
2. Point Initialization: Download the pre-trained MVSNet as below, then either train the feature extraction from scratch or directly download the pre-trained models (this yields the 'MVSNet' and 'init' folders inside the checkpoints folder).
3. Per-scene Optimization: Download pre-trained models or optimize from scratch as instructed.
For nerfsynthetic, colmap_nerfsynthetic, tanks&temples, scannet and dtu, we provide
all the checkpoint files [google drive](https://drive.google.com/drive/folders/1xk1GhDhgPk1MrlX8ncfBz5hNMvSa9vS6?usp=sharing) | [baidu wangpan](https://pan.baidu.com/s/1doJHI03Tgl_qIquGZuW5bw?pwd=p8bs); all the images and scores of the test results [google drive](https://drive.google.com/drive/folders/1KAYs7XuBJNMTHVBuOCtpLNv9P8UMoayw?usp=sharing) | [baidu wangpan](https://pan.baidu.com/s/1BMewWRSIkNFlp7DKYmx9vQ?pwd=3yse); and the video results [google drive](https://drive.google.com/drive/folders/1dutZEZO9vfeIbfWwplbIIam7YBeyZ0dY?usp=sharing) | [baidu wangpan](https://pan.baidu.com/s/1kC1qSL5dkT8cDdE3dHTc2A?pwd=j46j).
We also share the visual results of [npbg](https://github.com/alievk/npbg), [nsvf](https://github.com/facebookresearch/NSVF) and [ibrnet](https://github.com/googleinterns/IBRNet) on the NeRF Synthetic dataset, generated on our machine: [google drive](https://drive.google.com/drive/folders/1KHhljnqLvIJkRkaqQ8TaeBZirMsnDAhf?usp=sharing). Please cite their papers accordingly if interested.
## Installation
### Requirements
All the code is tested in the following environment:
* Linux (tested on Ubuntu 16.04, 18.04, 20.04)
* Python 3.6+
* PyTorch 1.7 or higher (tested on PyTorch 1.7, 1.8.1, 1.9, 1.10)
* CUDA 10.2 or higher
### Install
Install the dependent libraries as follows:
* Install the dependent python libraries:
```
pip install torch==1.8.1+cu102 h5py
pip install imageio scikit-image
```
* Install pycuda (crucial) following:
https://documen.tician.de/pycuda/install.html
* Install torch_scatter following:
https://github.com/rusty1s/pytorch_scatter
We developed our code with PyTorch 1.8.1, pycuda 2021.1, and torch_scatter 2.0.8.
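For reference, below is a minimal, self-contained example of the scatter-style reduction that torch_scatter provides, which is the kind of primitive used to pool a variable number of neighbor points per shading point. The tensors and grouping here are illustrative only.
```
import torch
from torch_scatter import scatter

# Suppose each of 6 neighbor points carries a 4-dim feature and is assigned to
# one of 3 shading points; scatter reduces the ragged groups in a single call.
feat = torch.randn(6, 4)                   # per-neighbor features
group = torch.tensor([0, 0, 1, 2, 2, 2])   # shading-point index per neighbor

pooled = scatter(feat, group, dim=0, dim_size=3, reduce='mean')  # (3, 4)
print(pooled.shape)
```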
## Data Preparation
The layout should look like this; we provide the complete data folder here: [google_drive](https://drive.google.com/drive/folders/1kqbbdbbN1bQnwYglRe4iV8dKnyCvoOFS?usp=sharing)
```
pointnerf
├── data_src
│   ├── dtu
│   │   ├── Cameras
│   │   ├── Depths
│   │   ├── Depths_raw
│   │   └── Rectified
│   ├── nerf
│   │   ├── nerf_synthetic
│   │   └── nerf_synthetic_colmap
│   ├── TanksAndTemple
│   └── scannet
│       └── scans
│           ├── scene0101_04
│           └── scene0241_01
```
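Before moving on to the per-dataset downloads below, a small script like the following (paths taken from the layout above; purely an optional sanity check) can confirm the folders are in place:
```
import os

# optional sanity check: verify the expected dataset folders exist
expected = [
    "data_src/dtu/Cameras",
    "data_src/dtu/Depths",
    "data_src/dtu/Depths_raw",
    "data_src/dtu/Rectified",
    "data_src/nerf/nerf_synthetic",
    "data_src/nerf/nerf_synthetic_colmap",
    "data_src/TanksAndTemple",
    "data_src/scannet/scans/scene0101_04",
    "data_src/scannet/scans/scene0241_01",
]

for path in expected:
    status = "ok     " if os.path.isdir(path) else "MISSING"
    print(f"{status} {path}")
```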
Or you can download using the official links as follows:
### DTU
Download the preprocessed [DTU training data](https://drive.google.com/file/d/1eDjh-_bxKKnEuz5h-HXS7EDJn59clx6V/view)
and [Depth_raw](https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/cascade-stereo/CasMVSNet/dtu_data/dtu_train_hr/Depths_raw.zip) from the original [MVSNet repo](https://github.com/YoYo000/MVSNet),
and unzip them.
### NeRF Synthetic
Download `nerf_synthetic.zip` from [here](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1) and extract it under `data_src/nerf/`.
### Tanks & Temples
Follow Neural Sparse Voxel Fields and download [Tanks&Temples](https://www.tanksandtemples.org/) ([download (.zip)](https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip); 0_\* scenes for training, 1_\* scenes for testing) into `data_src/TanksAndTemple/`.
### ScanNet
Download and extract ScanNet by following the instructions provided at http://www.scan-net.org/. The detailed steps are:
* Go to http://www.scan-net.org, fill in and send the request form.
* You will receive an email with the command instructions and a download-scannet.py file. That file is for Python 2; you can use our download-scannet.py in the `data` directory for Python 3.
* Clone the official repo:
```
git clone https://github.com/ScanNet/ScanNet.git
```
* Download specific scenes (used by NSVF):
```
python data/download-scannet.py -o ../data_src/scannet/ --id scene0101_04
python data/download-scannet.py -o ../data_src/scannet/ --id scene0241_01
```
* Process the sens files:
```
python ScanNet/SensReader/python/reader.py --filename data_src/nrData/scannet/scans/scene0101_04/scene0101_04.sens --output_path data_src/nrData/scannet/scans/scene0101_04/exported/ --export_depth_images --export_color_images --export_poses --export_intrinsics
python ScanNet/SensReader/python/reader.py --filename data_src/nrData/scannet/scans/scene0241_01/scene0241_01.sens --output_path data_src/nrData/scannet/scans/scene0241_01/exported/ --export_depth_images --export_color_images --export_poses --export_intrinsics
```
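After the export finishes, each frame's depth map is a 16-bit PNG in millimeters and each camera pose is a whitespace-separated 4x4 matrix in a text file. A minimal loading sketch (file names assumed from the SensReader export convention; adjust paths to your layout):
```
import numpy as np
import imageio

scan = "data_src/nrData/scannet/scans/scene0101_04/exported"

# depth is stored as a 16-bit PNG in millimeters; convert to meters
depth = imageio.imread(f"{scan}/depth/0.png").astype(np.float32) / 1000.0

# camera-to-world pose: whitespace-separated 4x4 matrix
pose = np.loadtxt(f"{scan}/pose/0.txt").reshape(4, 4)

print(depth.shape, pose.shape)
```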