# Monocular Total Capture
Code for CVPR19 paper "Monocular Total Capture: Posing Face, Body and Hands in the Wild"
![Teaser Image](https://xiangdonglai.github.io/MTC_teaser.jpg)
Project website: <http://domedb.perception.cs.cmu.edu/mtc.html>
# Dependencies
This code is tested on an Ubuntu 16.04 machine with a GTX 1080Ti GPU, with the following dependencies.
1. ffmpeg
2. Python 3.5 (with TensorFlow 1.5.0, OpenCV, Matplotlib, packages installed with pip3)
3. cmake >= 2.8
4. OpenCV 2.4.13 (compiled from source with CUDA 9.0, CUDNN 7.0)
5. Ceres-Solver 1.13.0 (with SuiteSparse)
6. OpenGL, GLUT, GLEW
7. libigl <https://github.com/libigl/libigl>
8. wget
9. OpenPose
# Installation
1. git clone this repository; suppose the main directory is ${ROOT} on your local machine.
2. "cd ${ROOT}"
3. "bash download.sh"
4. git clone OpenPose <https://github.com/CMU-Perceptual-Computing-Lab/openpose> and compile it. Suppose the main directory of OpenPose is ${openposeDir}, such that the compiled binary is at ${openposeDir}/build/examples/openpose/openpose.bin.
5. Edit ${ROOT}/run_pipeline.sh: set line 13 to your ${openposeDir}.
6. Edit ${ROOT}/FitAdam/CMakeLists.txt: set line 13 to the "include" directory of libigl (this is a header-only library).
7. "cd ${ROOT}/FitAdam/ && mkdir build && cd build"
8. "cmake .."
9. "make -j12"
# Usage
1. Suppose the video to be tested is named "${seqName}.mp4". Place it in "${ROOT}/${seqName}/${seqName}.mp4".
2. If the camera intrinsics are known, put them in "${ROOT}/${seqName}/calib.json" (refer to "POF/calib.json" for an example); otherwise, a default set of camera intrinsics will be used.
3. In ${ROOT}, run "bash run_pipeline.sh ${seqName}"; if the subject in the video shows only upper body, run "bash run_pipeline.sh ${seqName} -f".
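If you provide your own "calib.json", it should contain the camera intrinsics. As a hypothetical sketch (the key name and layout below are assumptions; the authoritative format is the bundled "POF/calib.json"), a pinhole intrinsic matrix might be stored as:

```json
{"K": [[2000.0, 0.0, 960.0], [0.0, 2000.0, 540.0], [0.0, 0.0, 1.0]]}
```

Here the diagonal entries are the focal lengths in pixels and the last column holds the principal point, typically near the image center.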
# Examples
"download.sh" automatically downloads 2 example videos for testing. After a successful installation, run
```
bash run_pipeline.sh example_dance
```
or
```
bash run_pipeline.sh example_speech -f
```
# License and Citation
This code can only be used for **non-commercial research purposes**. If you use this code in your research, please cite the following papers.
```
@inproceedings{xiang2019monocular,
title={Monocular total capture: Posing face, body, and hands in the wild},
author={Xiang, Donglai and Joo, Hanbyul and Sheikh, Yaser},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2019}
}
@inproceedings{joo2018total,
title={Total capture: A 3d deformation model for tracking faces, hands, and bodies},
author={Joo, Hanbyul and Simon, Tomas and Sheikh, Yaser},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2018}
}
```
Part of this code is modified from [lmb-freiburg/hand3d](https://github.com/lmb-freiburg/hand3d).
# Adam Model
We use the deformable human model [**Adam**](http://www.cs.cmu.edu/~hanbyulj/totalcapture/) in this code.
**The relationship between Adam and SMPL:** The body part of Adam is derived from the [SMPL](http://smpl.is.tue.mpg.de/license_body) model by Loper et al. 2015. It follows SMPL's body joint hierarchy, but uses a different joint regressor. Adam does not contain the original SMPL model's shape and pose blendshapes, but uses its own version trained from the Panoptic Studio dataset.
**The relationship between Adam and FaceWarehouse:** The face part of Adam is derived from [FaceWarehouse](http://kunzhou.net/zjugaps/facewarehouse/). In particular, the mesh topology of Adam's face is a modified version of the model learned from the FaceWarehouse dataset. Adam does not contain the blendshapes of the original FaceWarehouse data, and the facial expression of the Adam model is unavailable due to copyright issues.
The Adam model is shared for research purposes only and cannot be used commercially. Redistributing the original or a modified version of Adam is also not allowed without permission.
# Special Notice
1. In our code, the output of ceres::AngleAxisToRotationMatrix is always stored as a RowMajor matrix, while the function is designed for a ColMajor matrix. The stored matrix is therefore the transpose of the intended rotation, which for a rotation matrix equals rotating by the negated axis-angle vector. To account for this, before exporting our pose parameters to other software, please multiply them by -1.
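The identity behind this fix is that for any axis-angle vector a, R(-a) = R(a)^T, and a RowMajor buffer read as ColMajor is exactly the transpose. A small standalone sanity check (not part of this pipeline; it uses SciPy's rotation utilities rather than Ceres) illustrates the equivalence:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# An arbitrary axis-angle (rotation-vector) pose parameter for illustration.
a = np.array([0.3, -0.5, 0.8])

R = Rotation.from_rotvec(a).as_matrix()       # rotation for a
R_neg = Rotation.from_rotvec(-a).as_matrix()  # rotation for the negated parameters

# Negating the axis-angle vector yields the transposed rotation matrix,
# which is why multiplying the exported pose parameters by -1 compensates
# for the RowMajor/ColMajor mismatch.
assert np.allclose(R_neg, R.T)
```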