### PointNet++: *Deep Hierarchical Feature Learning on Point Sets in a Metric Space*
Created by <a href="http://charlesrqi.com" target="_blank">Charles R. Qi</a>, <a href="http://stanford.edu/~ericyi">Li (Eric) Yi</a>, <a href="http://ai.stanford.edu/~haosu/" target="_blank">Hao Su</a>, <a href="http://geometry.stanford.edu/member/guibas/" target="_blank">Leonidas J. Guibas</a> from Stanford University.
![prediction example](https://github.com/charlesq34/pointnet2/blob/master/doc/teaser.jpg)
### Citation
If you find our work useful in your research, please consider citing:
```
@article{qi2017pointnetplusplus,
  title={PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space},
  author={Qi, Charles R and Yi, Li and Su, Hao and Guibas, Leonidas J},
  journal={arXiv preprint arXiv:1706.02413},
  year={2017}
}
```
### Introduction
This work is based on our NIPS'17 paper. You can find the arXiv version of the paper <a href="https://arxiv.org/pdf/1706.02413.pdf">here</a> or check the <a href="http://stanford.edu/~rqi/pointnet2">project webpage</a> for a quick overview. PointNet++ is a follow-up project that builds on and extends <a href="https://github.com/charlesq34/pointnet">PointNet</a>; it is version 2.0 of the PointNet architecture.
PointNet (the v1 model) either transforms features of *individual points* independently or processes global features of the *entire point set*. However, in many cases there are well-defined distance metrics, such as Euclidean distance for 3D point clouds collected by 3D sensors, or geodesic distance for manifolds like isometric shape surfaces. In PointNet++ we want to respect the *spatial localities* of those point sets. PointNet++ learns hierarchical features with increasing scales of context, much like a convolutional neural network. Besides, we also observe a challenge that is not present in convnets (with images): non-uniform densities in natural point clouds. To deal with these non-uniform densities, we further propose special layers that are able to intelligently aggregate information from different scales.
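The hierarchical grouping underlying this idea rests on two primitives: farthest point sampling to pick centroids, and ball query to group neighboring points around each centroid. As a rough illustration, here is a minimal NumPy sketch of both (illustrative only; the repository's actual implementations are CUDA ops under `tf_ops`):

```python
import numpy as np

def farthest_point_sample(points, n_samples):
    """Iteratively pick the point farthest from the set chosen so far."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=int)  # start from point 0
    dist = np.full(n, np.inf)
    for i in range(1, n_samples):
        # Update each point's distance to its nearest chosen centroid.
        d = np.sum((points - points[chosen[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))
    return chosen

def ball_query(points, centroid_idx, radius, k):
    """For each centroid, gather up to k neighbor indices within `radius`."""
    groups = []
    for c in centroid_idx:
        d = np.linalg.norm(points - points[c], axis=1)
        groups.append(np.where(d < radius)[0][:k])
    return groups
```

Each set abstraction level applies this sampling/grouping step and then runs a small PointNet on every local group, so features are extracted over progressively larger neighborhoods.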
In this repository we release code and data for our PointNet++ classification and segmentation networks, as well as a few utility scripts for training, testing, data processing and visualization.
### Installation
Install <a href="https://www.tensorflow.org/install/">TensorFlow</a>. The code has been tested with the TF1.2 GPU version and Python 2.7 (version 3 should also work) on Ubuntu 14.04. It also depends on a few Python libraries for data processing and visualization, such as `cv2` and `h5py`. It is highly recommended that you have access to GPUs.
#### Compile Customized TF Operators
The TF operators are included under `tf_ops`; you need to compile them first (check `tf_xxx_compile.sh` under each ops subfolder). Update the `nvcc` and `python` paths if necessary. The code has been tested under TF1.2.0. If you are using an earlier version, you may need to remove the `-D_GLIBCXX_USE_CXX11_ABI=0` flag from the g++ command to compile correctly.
To compile the operators with TF version >= 1.4, you need to modify the compile scripts slightly.
First, find the TensorFlow include and library paths:
```
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
```
Then, add the flags `-I$TF_INC/external/nsync/public -L$TF_LIB -ltensorflow_framework` to the `g++` commands.
### Usage
#### Shape Classification
To train a PointNet++ model to classify ModelNet40 shapes (using point clouds with XYZ coordinates):
```
python train.py
```
To see all optional arguments for training:
```
python train.py -h
```
If you have multiple GPUs on your machine, you can also run the multi-GPU version training (our implementation is similar to the tensorflow <a href="https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10">cifar10 tutorial</a>):
```
CUDA_VISIBLE_DEVICES=0,1 python train_multi_gpu.py --num_gpus 2
```
After training, to evaluate the classification accuracies (with optional multi-angle voting):
```
python evaluate.py --num_votes 12
```
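Multi-angle voting averages class scores over several rotated copies of each shape before taking the argmax. A minimal sketch of the idea, assuming a `classify` callable that maps an (N, 3) cloud to per-class scores (the names here are illustrative, not the repository's API):

```python
import numpy as np

def rotate_up_axis(points, angle):
    """Rotate an (N, 3) point cloud about the up (y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return points @ rot

def vote_predict(classify, points, num_votes=12):
    """Average class scores over `num_votes` rotated copies of the cloud."""
    scores = [classify(rotate_up_axis(points, 2.0 * np.pi * v / num_votes))
              for v in range(num_votes)]
    return int(np.argmax(np.mean(scores, axis=0)))
```

Since the network is not perfectly rotation-invariant, averaging over rotations typically smooths out orientation-dependent errors and gives a small accuracy boost.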
<i>Side Note:</i> For the XYZ+normal experiment reported in our paper: (1) 5000 points are used; (2) a further random data dropout augmentation is used during training (see the commented line after `augment_batch_data` in `train.py`); and (3) the model architecture is updated so that `nsample=128` in the first two set abstraction levels, which suits the larger point density of 5000-point samplings.
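For illustration, the random data dropout mentioned in (2) can be sketched as below. This mirrors the spirit of the dropout augmentation in `provider.py`: a random fraction of each cloud is overwritten with the first point, simulating the non-uniform densities the model must cope with (the exact ratio here is an assumption):

```python
import numpy as np

def random_point_dropout(batch, max_dropout_ratio=0.875):
    """Overwrite a random fraction of each cloud with its first point,
    keeping the array shape fixed while simulating sparser sampling."""
    out = batch.copy()
    for b in range(out.shape[0]):
        ratio = np.random.random() * max_dropout_ratio  # per-cloud dropout rate
        drop = np.where(np.random.random(out.shape[1]) <= ratio)[0]
        if drop.size > 0:
            out[b, drop, :] = out[b, 0, :]  # duplicate first point over dropped slots
    return out
```

Duplicating a point instead of removing it keeps mini-batch tensors rectangular, which is what fixed-size network inputs require.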
To use normal features for classification: you can get our sampled point clouds of ModelNet40 (XYZ and normals from the mesh, 10k points per shape) <a href="https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip">here (1.6GB)</a>. Move the uncompressed data folder to `data/modelnet40_normal_resampled`.
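If you write your own loader for this data, here is a minimal sketch for one shape, assuming each `.txt` line stores comma-separated `x,y,z,nx,ny,nz` (see `modelnet_dataset.py` for the actual loader; the unit-sphere normalization below is an assumption about the preprocessing):

```python
import numpy as np

def load_shape(txt_path, num_points=1024):
    """Load one resampled shape: each line is x,y,z,nx,ny,nz."""
    pts = np.loadtxt(txt_path, delimiter=',').astype(np.float32)
    pts = pts[:num_points]
    # Center xyz and scale to a unit sphere; leave the normals untouched.
    xyz = pts[:, :3]
    xyz = xyz - xyz.mean(axis=0)
    xyz = xyz / np.max(np.linalg.norm(xyz, axis=1))
    return np.concatenate([xyz, pts[:, 3:]], axis=1)
```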
#### Object Part Segmentation
To train a model to segment object parts for ShapeNet models:
```
cd part_seg
python train.py
```
The preprocessed ShapeNetPart dataset (XYZ, normals and part labels) can be found <a href="https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip">here (674MB)</a>. Move the uncompressed data folder to `data/shapenetcore_partanno_segmentation_benchmark_v0_normal`.
#### Semantic Scene Parsing
See `scannet/README` and `scannet/train.py` for details.
#### Visualization Tools
We have provided a handy point cloud visualization tool under `utils`. Run `sh compile_render_balls_so.sh` to compile it, then try the demo with `python show3d_balls.py`. The original code is from <a href="http://github.com/fanhqme/PointSetGeneration">here</a>.
#### Prepare Your Own Data
You can refer to <a href="https://github.com/charlesq34/3dmodel_feature/blob/master/io/write_hdf5.py">this script</a> for how to prepare your own HDF5 files for either classification or segmentation, or to `modelnet_dataset.py` for how to read raw data files and prepare mini-batches from them. A more advanced option is TensorFlow's dataset API, for which you can find more documentation <a href="https://www.tensorflow.org/programmers_guide/datasets">here</a>.
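As a rough sketch of the HDF5 route, the following writes point clouds plus labels and iterates over shuffled mini-batches. The dataset names `data`/`label` follow the convention of the referenced script, but treat the details here as assumptions, not the repository's exact format:

```python
import h5py
import numpy as np

def write_h5(path, points, labels):
    """Store (num_shapes, num_points, 3) clouds and integer labels."""
    with h5py.File(path, 'w') as f:
        f.create_dataset('data', data=points.astype('float32'))
        f.create_dataset('label', data=labels.astype('uint8'))

def iter_minibatches(path, batch_size):
    """Yield shuffled (points, labels) mini-batches from an HDF5 file."""
    with h5py.File(path, 'r') as f:
        data, label = f['data'][:], f['label'][:]
    order = np.random.permutation(len(data))
    # Drop the last incomplete batch to keep tensor shapes fixed.
    for i in range(0, len(order) - batch_size + 1, batch_size):
        idx = order[i:i + batch_size]
        yield data[idx], label[idx]
```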