# Introduction
Implicitron is a PyTorch3D-based framework for new-view synthesis via neural-network-based scene representations.
# License
Implicitron is distributed as part of PyTorch3D under the [BSD license](https://github.com/facebookresearch/pytorch3d/blob/main/LICENSE).
It includes code from the [NeRF](https://github.com/bmild/nerf), [SRN](http://github.com/vsitzmann/scene-representation-networks) and [IDR](http://github.com/lioryariv/idr) repos.
See [LICENSE-3RD-PARTY](https://github.com/facebookresearch/pytorch3d/blob/main/LICENSE-3RD-PARTY) for their licenses.
# Installation
There are three ways to set up Implicitron, depending on the flexibility level required.
If you only want to train or evaluate models as they are implemented, changing only the parameters, you can simply install the package.
Implicitron also provides a flexible API that supports user-defined plug-ins;
if you want to re-implement some of the components without changing the high-level pipeline, you need to create a custom launcher script.
The most flexible option, though, is cloning the PyTorch3D repo and building it from source, which allows changing the code in arbitrary ways.
Below, we describe all three options in more detail.
## [Option 1] Running an executable from the package
This option allows you to use the code as is without changing the implementations.
Only configuration can be changed (see [Configuration system](#configuration-system)).
For this setup, install the dependencies and PyTorch3D from conda following [the guide](https://github.com/facebookresearch/pytorch3d/blob/master/INSTALL.md#1-install-with-cuda-support-from-anaconda-cloud-on-linux-only). Then, install Implicitron-specific dependencies:
```shell
pip install "hydra-core>=1.1" visdom lpips matplotlib accelerate
```
The runner executable is available as the `pytorch3d_implicitron_runner` shell command.
See [Running](#running) section below for examples of training and evaluation commands.
## [Option 2] Supporting custom implementations
To plug in custom implementations, for example, of renderer or implicit-function protocols, you need to create your own runner script and import the plug-in implementations there.
First, install PyTorch3D and Implicitron dependencies as described in the previous section.
Then, implement the custom script; copying `pytorch3d/projects/implicitron_trainer` is a good place to start.
See [Custom plugins](#custom-plugins) for more information on how to import implementations and enable them in the configs.
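Conceptually, the plug-in mechanism works like a class registry: a custom implementation subclasses a replaceable base class and registers itself, after which it can be selected by name from the config. Below is a minimal self-contained sketch of that pattern; it is **not** Implicitron's actual API (the real registry lives in `pytorch3d.implicitron.tools.config`), and the class names are hypothetical:

```python
# Simplified stand-in for a plugin registry; Implicitron's real registry
# (pytorch3d.implicitron.tools.config) has a richer, type-checked API.
REGISTRY = {}

def register(cls):
    """Make a class selectable by name from a config."""
    REGISTRY[cls.__name__] = cls
    return cls

class ImplicitFunctionBase:
    """Replaceable base class: plug-ins subclass it and register themselves."""
    def forward(self, points):
        raise NotImplementedError

@register
class MyCustomImplicitFunction(ImplicitFunctionBase):
    """Hypothetical user plug-in imported by a custom runner script."""
    def forward(self, points):
        return [p * 2.0 for p in points]  # dummy computation for illustration

# The config names the desired implementation; the runner instantiates it.
impl = REGISTRY["MyCustomImplicitFunction"]()
print(impl.forward([1.0, 2.0]))  # [2.0, 4.0]
```

Importing the module that defines the plug-in is what triggers registration, which is why a custom runner script must import its plug-ins before parsing the config.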
## [Option 3] Cloning PyTorch3D repo
This is the most flexible way to set up Implicitron, as it lets you change the code directly, e.g. to modify the high-level rendering pipeline or to implement yet-unsupported loss functions.
Please follow the instructions to [install PyTorch3D from a local clone](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md#2-install-from-a-local-clone).
Then, install Implicitron-specific dependencies:
```shell
pip install "hydra-core>=1.1" visdom lpips matplotlib accelerate
```
You are still encouraged to implement custom plugins as above where possible as it makes reusing the code easier.
The executable is located in `pytorch3d/projects/implicitron_trainer`.
> **_NOTE:_** The `pytorch3d_implicitron_runner` and `pytorch3d_implicitron_visualizer`
executables (mentioned below) are not available when using a local clone.
Instead, use the Python scripts `experiment.py` and `visualize_reconstruction.py` (see the [Running](#running) section below).
# Running
This section assumes that you use the executables provided by the installed package
(Option 1 / Option 2 in [Installation](#installation) above),
i.e. that `pytorch3d_implicitron_runner` and `pytorch3d_implicitron_visualizer` are available.
> **_NOTE:_** If the executables are not available (e.g. when using a local clone, Option 3 in [Installation](#installation)),
users should directly use the `experiment.py` and `visualize_reconstruction.py` python scripts
which correspond to the executables as follows:
- `pytorch3d_implicitron_runner` corresponds to `<pytorch3d_root>/projects/implicitron_trainer/experiment.py`
- `pytorch3d_implicitron_visualizer` corresponds to `<pytorch3d_root>/projects/implicitron_trainer/visualize_reconstruction.py`
For instance, in order to directly execute training with the python script, users can call:
```shell
cd <pytorch3d_root>/projects/
python -m implicitron_trainer.experiment <args>
```
If you have a custom `experiment.py` or `visualize_reconstruction.py` script
(as in Option 2 [above](#installation)), replace the executable with the path to your script.
## Training
To run training, pass a YAML config file, followed by a list of argument overrides.
For example, to train NeRF on the first skateboard sequence from the CO3D dataset, you can run:
```shell
dataset_args=data_source_ImplicitronDataSource_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf \
$dataset_args.dataset_root=<DATASET_ROOT> $dataset_args.category='skateboard' \
$dataset_args.test_restrict_sequence_id=0 test_when_finished=True exp_dir=<CHECKPOINT_DIR>
```
Here, `--config-path` points to the config folder relative to the `pytorch3d_implicitron_runner` location;
`--config-name` picks the config (in this case, `repro_singleseq_nerf.yaml`);
and `test_when_finished` launches the evaluation script once training finishes.
Replace `<DATASET_ROOT>` with the location where the dataset in Implicitron format is stored
and `<CHECKPOINT_DIR>` with a directory where checkpoints will be dumped during training.
Other configuration parameters can be overridden in the same way.
See [Configuration system](#configuration-system) section for more information on this.
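Each dotted override addresses one node in the nested config tree. As a rough stdlib-only illustration (Hydra/OmegaConf implement this with far more machinery, e.g. schema validation and value typing), applying an `a.b.c=value` override to a nested dict config could look like:

```python
def apply_override(config: dict, override: str) -> None:
    """Apply a single 'a.b.c=value' style override to a nested dict config.

    Simplified sketch: values stay strings here, whereas Hydra converts
    them to the types declared by the config schema.
    """
    dotted_key, value = override.split("=", 1)
    *parents, leaf = dotted_key.split(".")
    node = config
    for key in parents:
        node = node.setdefault(key, {})  # walk/create intermediate nodes
    node[leaf] = value

config = {}
apply_override(
    config,
    "data_source_ImplicitronDataSource_args."
    "dataset_map_provider_JsonIndexDatasetMapProvider_args.category=skateboard",
)
print(config)
```

This is why the `dataset_args=...` shell variable in the example above works: it is plain string prefixing, expanded by the shell before Hydra ever sees the override.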
### Visdom logging
Note that the training script logs its progress to Visdom. Make sure to start a Visdom server before the training commences:
```shell
python -m visdom.server
```
> If a Visdom server is not running, the console will be flooded with `requests.exceptions.ConnectionError` errors signalling that a Visdom server is not available. Note that these errors **will not interrupt** the program; training will continue without issues.
## Evaluation
To run evaluation on the latest checkpoint after (or during) training, simply add `eval_only=True` to your training command.
For example, to evaluate NeRF on the skateboard sequence, you can run:
```shell
dataset_args=data_source_ImplicitronDataSource_args.dataset_map_provider_JsonIndexDatasetMapProvider_args
pytorch3d_implicitron_runner --config-path ./configs/ --config-name repro_singleseq_nerf \
$dataset_args.dataset_root=<CO3D_DATASET_ROOT> $dataset_args.category='skateboard' \
$dataset_args.test_restrict_sequence_id=0 exp_dir=<CHECKPOINT_DIR> eval_only=True
```
Evaluation prints the metrics to `stdout` and dumps them to a json file in `exp_dir`.
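Since the dumped metrics are plain JSON, they are easy to post-process with standard tools. A sketch of loading them (the exact file name inside `exp_dir` depends on the run, so `results_test.json` and the metric keys below are only assumed examples):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical setup: the metrics file name and keys depend on the eval
# configuration; "results_test.json" and "psnr" are assumptions here.
exp_dir = Path(tempfile.mkdtemp())
metrics_path = exp_dir / "results_test.json"

# Stand-in for the file the evaluator dumps into exp_dir.
metrics_path.write_text(json.dumps([{"metric": "psnr", "value": 24.3}]))

# Load the dumped metrics for post-processing.
results = json.loads(metrics_path.read_text())
for entry in results:
    print(f"{entry['metric']}: {entry['value']}")
```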
## Visualisation
The visualizer script renders a video from a trained model along a pre-defined camera trajectory.
For it to work, `ffmpeg` needs to be installed:
```shell
conda install ffmpeg
```
Here is an example of calling the script:
```shell
pytorch3d_implicitron_visualizer exp_dir=<CHECKPOINT_DIR> \
visdom_show_preds=True n_eval_cameras=40 render_size="[64,64]" video_size="[256,256]"
```
The argument `n_eval_cameras` sets the number of rendering viewpoints sampled along a trajectory, which defaults to a circular fly-around;
`render_size` sets the size of the render passed to the model, which is resized to `video_size` before writing.
Rendered videos of images, masks, and depth maps will be saved to `<CHECKPOINT_DIR>/video`.
# Configuration system
We use Hydra and OmegaConf to parse the configs.
The config schema and default values are defined by the dataclasses implementing the modules.
More specifically, if a class derives from `Configurable`, its fields define config parameters that can be set in the config YAML file or overridden on the command line.
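To illustrate the idea with plain stdlib dataclasses rather than Implicitron's actual `Configurable` machinery (the class and field names below are hypothetical): the fields and their defaults define the schema, and a config only needs to list the values it overrides:

```python
from dataclasses import dataclass, fields

@dataclass
class RendererConfig:
    """Toy stand-in for a Configurable module: fields define the schema."""
    n_pts_per_ray: int = 64
    ray_length_min: float = 0.1
    ray_length_max: float = 10.0

def from_config(cls, overrides: dict):
    """Build an instance from dataclass defaults plus config overrides."""
    known = {f.name for f in fields(cls)}
    unknown = set(overrides) - known
    if unknown:
        # Like Hydra, reject keys that are not part of the schema.
        raise ValueError(f"Unknown config keys: {unknown}")
    return cls(**overrides)

cfg = from_config(RendererConfig, {"n_pts_per_ray": 128})
print(cfg)  # defaults are kept for every field not overridden
```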