<br />
<p align="center">
<img src="docs/artiboost_logo_512ppi.png" alt="icon" width="40%">
<h1 align="center">
Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis
</h1>
<p align="center">
<img src="docs/capture.png"" alt="capture" width="75%">
</p>
<p align="center">
<strong>CVPR, 2022</strong>
<br />
<a href="https://lixiny.github.io"><strong>Lixin Yang * </strong></a>
·
<a href="https://kailinli.top"><strong>Kailin Li *</strong></a>
·
<a href=""><strong>Xinyu Zhan</strong></a>
·
<a href="https://lyuj1998.github.io"><strong>Jun Lv</strong></a>
·
<a href=""><strong>Wenqiang Xu</strong></a>
·
<a href="https://jeffli.site"><strong>Jiefeng Li</strong></a>
·
<a href="https://mvig.sjtu.edu.cn"><strong>Cewu Lu</strong></a>
<br />
* = equal contribution
</p>
<p align="center">
<a href='https://openaccess.thecvf.com/content/CVPR2022/html/Yang_ArtiBoost_Boosting_Articulated_3D_Hand-Object_Pose_Estimation_via_Online_Exploration_CVPR_2022_paper.html'>
<img src='https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=Googlescholar&logoColor=blue' alt='Paper PDF'>
</a>
<a href='https://arxiv.org/abs/2109.05488' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/ArXiv-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='ArXiv PDF'>
</a>
<a href='https://www.youtube.com/watch?v=QbPsjWRyloY' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Youtube-Video-red?style=flat&logo=youtube&logoColor=red' alt='Youtube Video'>
</a>
</p>
</p>
<br />
This repo contains the pretrained models and the training and testing code.
## TODO
- [x] installation guideline
- [x] testing code and pretrained models
- [ ] generating CCV-space
- [ ] training pipeline
## Installation
<a href="https://pytorch.org/get-started/locally/">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-1.8.1-ee4c2c?logo=pytorch&logoColor=red">
</a>
<a href="https://developer.nvidia.com/cuda-11.1.0-download-archive" style='padding-left: 0.1rem;'>
<img alt="PyTorch" src="https://img.shields.io/badge/CUDA-11.1-yellow?logo=nvidia&logoColor=yellow">
</a>
<a href="https://releases.ubuntu.com/20.04/" style='padding-left: 0.1rem;'>
<img alt="Ubuntu" src="https://img.shields.io/badge/Ubuntu-20.04-green?logo=ubuntu&logoColor=yelgreenlow">
</a>
Follow the [Installation Instruction](docs/Installation.md) to set up the environment, assets, and datasets.
## Evaluation
### HO3Dv2, Heatmap-based model, ArtiBoost
Download checkpoint: [pretrained](https://drive.google.com/file/d/1AEZdR46FslwRWrm0NYUh9h6riO1XwXhO/view?usp=sharing) (`artiboost_ho3dv2_clasbased_100e.pth.tar`) to `./checkpoints`.
Then run:
```shell
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100
```
This script yields the (Ours _Clas_ + **Arti**) result in Table 2 of the main paper.
- The object MPCPE score is stored in `exp/submit_{cfg}_{time}/evaluations/`.
- The HO3Dv2 Codalab submission file will be dumped at: `./exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip`.
Upload it to the [HO3Dv2 Codalab](https://codalab.lisn.upsaclay.fr/competitions/4318) server and wait for the evaluation to finish.
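Each run writes into a fresh timestamped directory, so after several evaluations it can help to locate the newest submission archive programmatically. Below is a minimal sketch of a hypothetical helper (not part of the repo) that assumes the `exp/submit_{cfg}_{time}/{cfg}_SUBMIT.zip` layout described above:

```python
from pathlib import Path
from typing import Optional


def latest_submit_zip(exp_root: str = "exp") -> Optional[Path]:
    """Return the most recently modified {cfg}_SUBMIT.zip under exp/submit_*, if any."""
    zips = sorted(
        Path(exp_root).glob("submit_*/*_SUBMIT.zip"),
        key=lambda p: p.stat().st_mtime,  # newest last
    )
    return zips[-1] if zips else None
```

`latest_submit_zip()` then returns the path of the archive to upload to the Codalab server.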
You can also **visualize the predictions**, as shown in the images below:
<p align="center">
<img src="docs/qualitative.png"" alt="capture" width="90%">
</p>
First, install the extra packages required for rendering. Use `pip` to install them in order:
```shell
$ pip install vtk==9.0.1
$ pip install PyQt5==5.15.4
$ pip install PyQt5-Qt5==5.15.2
$ pip install PyQt5-sip==12.8.1
$ pip install mayavi==4.7.2
```
Second, connect a display (a monitor, TeamViewer, or a VNC server) that supports the Qt platform plugin "xcb".
Inside that display session, start a new terminal and append `--postprocess_fit_mesh --postprocess_draw` to the end of the shell command,
e.g.
```sh
# HO3Dv2, Heatmap-based model, ArtiBoost
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_clasbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11 --batch_size 100 \
--postprocess_fit_mesh --postprocess_draw
```
The rendered qualitative results are stored at `exp/submit_{cfg}_{time}/rendered_image/`.
### HO3Dv2, Regression-based model, ArtiBoost
Download checkpoint: [pretrained](https://drive.google.com/file/d/1RmbQ3jEkvK9yaa-MFVwrAxtzJYqjxMx5/view?usp=sharing) (`artiboost_ho3dv2_regbased_100e.pth.tar`) to `./checkpoints`, then run:
```shell
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv2_regbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11
```
This script yields the (Ours _Reg_ + **Arti**) result in Table 2 of the main paper.
### HO3Dv3, Heatmap-based model, ArtiBoost
Download checkpoint: [pretrained](https://drive.google.com/file/d/1PGTPki_AYtcJaHog_1EELvJHZJPY8VSn/view?usp=sharing) (`artiboost_ho3dv3_clasbased_200e.pth.tar`) to `./checkpoints`, then run:
```shell
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv3_clasbased_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11
```
This script yields the (Ours _Clas_ + **Arti**) result in Table 5 of the main paper.
Upload the resulting submission file to the [HO3Dv3 Codalab](https://codalab.lisn.upsaclay.fr/competitions/4393) server and wait for the evaluation to finish.
### HO3Dv3, Heatmap-based, Object symmetry model, ArtiBoost
Download checkpoint: [pretrained](https://drive.google.com/file/d/1lCU2hemolkJ7Z7armyYHvJv-yEiQKxYz/view?usp=sharing) (`artiboost_ho3dv3_clasbased_sym_200e.pth.tar`) to `./checkpoints`, then run:
```shell
$ python train/submit_reload.py --cfg config_eval/eval_ho3dv3_clasbased_sym_artiboost.yaml \
--gpu_id 0 --submit_dump --filter_unseen_obj_idxs 11
```
This script yields the (Ours _Clas_ sym + **Arti**) result in Table 5 of the main paper.
### DexYCB, Heatmap-based, Object symmetry model, ArtiBoost
Download checkpoint: [pretrained](https://drive.google.com/file/d/1i49UVkWQtXoaRHjV3JtHQ_l1nO6vMx89/view?usp=share_link) (`artiboost_dexycb_clasbased_sym_100e.pth.tar`) to `./checkpoints`, then run:
```shell
$ python train/submit_reload.py --cfg config_eval/eval_dexycb_clasbased_sym_artiboost.yaml --gpu_id 0
```
This script yields the (Ours _Clas_ sym + **Arti**) result in Table 4 of the main paper.
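The evaluation commands above differ only in the config file and a few flags, so they can be enumerated from one place. The wrapper below is a hypothetical convenience sketch (the config names and flags are copied verbatim from this README; nothing in it is part of the official codebase). It prints each command by default; swap `print` for `subprocess.run` to execute them:

```python
import shlex

# (config file, extra flags) for each evaluation listed in this README.
EVAL_RUNS = [
    ("config_eval/eval_ho3dv2_clasbased_artiboost.yaml",
     ["--submit_dump", "--filter_unseen_obj_idxs", "11", "--batch_size", "100"]),
    ("config_eval/eval_ho3dv2_regbased_artiboost.yaml",
     ["--submit_dump", "--filter_unseen_obj_idxs", "11"]),
    ("config_eval/eval_ho3dv3_clasbased_artiboost.yaml",
     ["--submit_dump", "--filter_unseen_obj_idxs", "11"]),
    ("config_eval/eval_ho3dv3_clasbased_sym_artiboost.yaml",
     ["--submit_dump", "--filter_unseen_obj_idxs", "11"]),
    ("config_eval/eval_dexycb_clasbased_sym_artiboost.yaml", []),
]


def build_commands(gpu_id=0):
    """Assemble one shell command string per evaluation config."""
    commands = []
    for cfg, extra in EVAL_RUNS:
        argv = ["python", "train/submit_reload.py", "--cfg", cfg,
                "--gpu_id", str(gpu_id)] + extra
        commands.append(shlex.join(argv))
    return commands


if __name__ == "__main__":
    for cmd in build_commands():
        print(cmd)
```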
## Generate CCV
## Training Pipeline
## Acknowledgement & Citation
```bibtex
@inproceedings{yang2021ArtiBoost,
title={{ArtiBoost}: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis},
author={Yang, Lixin and Li, Kailin and Zhan, Xinyu and Lv, Jun and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```