# GLA-GCN-Based 3D Human Pose Estimation (with Datasets and Pre-trained Models)
## Introduction
3D human pose estimation has been researched for decades with promising results. 3D human pose lifting is one of the most promising directions for this task, where both estimated and ground truth 2D pose data are used for training. Existing pose lifting works mainly focus on improving performance on estimated 2D poses, but they usually underperform when tested on ground truth 2D pose data. We observe that performance on estimated poses can be readily improved by preparing better-quality 2D poses, e.g., by fine-tuning the 2D poses or using a more advanced 2D pose detector. We therefore concentrate on improving 3D human pose lifting from ground truth data, anticipating future improvements in the quality of estimated pose data.
Towards this goal, a simple yet effective model called Global-local Adaptive Graph Convolutional Network (GLA-GCN) is proposed in this work. Our GLA-GCN globally models the spatiotemporal structure via a graph representation and backtraces local joint features for 3D human pose estimation via individually connected layers.
We conduct extensive experiments on three benchmark datasets, Human3.6M, HumanEva-I, and MPI-INF-3DHP, to validate our model design. Experimental results show that our GLA-GCN implemented with ground truth 2D poses significantly outperforms state-of-the-art methods (e.g., up to 3%, 17%, and 13% error reductions on Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively).
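To make the design concrete, below is a minimal PyTorch sketch of the two ingredients: a graph convolution with a learnable (adaptive) adjacency matrix that mixes joint features globally, and per-joint output heads standing in for the individually connected layers. Shapes, names, and initialization here are illustrative assumptions rather than the exact implementation (see `agcn.py` and `s_agcn.py` in the repository for the real code).

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Graph convolution with a learnable (adaptive) adjacency matrix."""
    def __init__(self, in_ch, out_ch, num_joints):
        super().__init__()
        # Initialized to identity; training adapts it to the skeleton.
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.proj = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                              # x: (batch, channels, joints)
        x = torch.einsum('bcj,jk->bck', x, self.adj)   # mix joint features globally
        return self.proj(x)                            # mix channels per joint

class PerJointHead(nn.Module):
    """One small linear head per joint ('individually connected layers')."""
    def __init__(self, in_ch, num_joints, out_dim=3):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(in_ch, out_dim) for _ in range(num_joints))

    def forward(self, x):                              # x: (batch, channels, joints)
        return torch.stack([h(x[:, :, j]) for j, h in enumerate(self.heads)], dim=1)
```

The learnable adjacency lets training discover joint relations beyond the fixed skeleton, while the per-joint heads keep each joint's 3D regression parameters separate.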
## Visualization and Comparison with SOTA
<div align="center">
<a href="https://youtu.be/AGNFxQ5O8xM?t=23s">
<img
src="figures/video.png"
alt="Everything Is AWESOME"
style="width:100%;">
</a>
</div>
## Environment
The code is developed and tested in the following environment:
* Python 3.8
* PyTorch 1.8 or higher
* CUDA 11
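A quick sanity check that the installed stack matches (an illustrative snippet, not part of the repository):

```python
import torch

# Expect PyTorch 1.8+ built against CUDA 11 with a visible GPU.
print(torch.__version__)          # e.g. '1.8.0'
print(torch.version.cuda)         # should start with '11'
print(torch.cuda.is_available())  # True if a GPU is usable
```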
## Dataset
The source code supports training/evaluating on the [Human3.6M](http://vision.imar.ro/human3.6m) dataset. Our code is compatible with the dataset setup introduced by [Martinez et al.](https://github.com/una-dinosauria/3d-pose-baseline) and [Pavllo et al.](https://github.com/facebookresearch/VideoPose3D). Please refer to [VideoPose3D](https://github.com/facebookresearch/VideoPose3D) to set up the Human3.6M dataset (`./data` directory). We upload the 2D CPN training data [here](https://drive.google.com/file/d/131EnG8L0-A9DNy9bfsqCSrG1n5GnzwkO/view?usp=sharing) and the 3D ground truth data [here](https://drive.google.com/file/d/1nbscv_IlJ-sdug6GU2KWN4MYkPtYj4YX/view?usp=sharing).
### Our updates
Some of the links above might not work; we provide the following files for reproducing the results in our paper:
* Human3.6M: [CPN 2D](https://drive.google.com/file/d/1ayw5DI-CwD4XGtAu69bmbKVOteDFJhH5/view?usp=sharing), [Ground-truth 2D](https://drive.google.com/file/d/1U0Z85HBXutOXKMNOGks4I1ape8hZsAMl/view?usp=sharing), and [Ground-truth 3D](https://drive.google.com/file/d/13PgVNC-eDkEFoHDHooUGGmlVmOP-ri09/view?usp=sharing).
* HumanEva-I: [MRCNN 2D](https://drive.google.com/file/d/1IcO6NSp5O8mrjUTXadvfpvrKQRnhra88/view?usp=sharing), [Ground-truth 2D](https://drive.google.com/file/d/1UuW6iTdceNvhjEY2rFF9mzW93Fi1gMtz/view?usp=sharing), and [Ground-truth 3D](https://drive.google.com/file/d/1CtAJR_wTwfh4rEjQKKmABunkyQrvZ6tu/view?usp=sharing).
The links above are on Google Drive. You can also download all of the above files via [BaiduYun](https://pan.baidu.com/s/1onNLKqrAbsc3mKRum5CAwA) (extraction code: 1234).
Please put these files in the `./data` directory to reproduce the results.
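Once in place, the archives load as standard NumPy `.npz` files. A minimal sanity check, assuming the VideoPose3D naming convention (the actual file names in the downloads may differ):

```python
import numpy as np

# Hypothetical file names following the VideoPose3D convention.
poses_2d = np.load('data/data_2d_h36m_cpn_ft_h36m_dbb.npz', allow_pickle=True)
poses_3d = np.load('data/data_3d_h36m.npz', allow_pickle=True)
print(poses_2d.files)  # e.g. ['positions_2d', 'metadata']
print(poses_3d.files)  # e.g. ['positions_3d']
```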
## Evaluating pre-trained models
### Human3.6M
We provide pre-trained models for both CPN and ground truth (GT) 2D data in the `./checkpoint` directory. To evaluate them, run the commands below.
For the CPN model:
```bash
python run_s-agcn.py -tta -k cpn_ft_h36m_dbb --evaluate 96_cpn_ft_h36m_dbb_243_supervised.bin
```
For the ground truth model:
```bash
python run_s-agcn.py -tta --evaluate 96_gt_243_supervised.bin
```
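The `-tta` flag averages predictions over the horizontally flipped input, as in VideoPose3D-style pipelines. A sketch of the idea, where the joint-pairing lists and tensor shapes are illustrative assumptions:

```python
import torch

def flip_tta(model, pose_2d, left, right):
    """Test-time flip augmentation (sketch).
    pose_2d: (batch, frames, joints, 2); left/right: lists of paired
    left/right joint indices; model output assumed (batch, frames, joints, 3)."""
    out = model(pose_2d)
    flipped = pose_2d.clone()
    flipped[..., 0] *= -1                                      # mirror x
    flipped[:, :, left + right] = flipped[:, :, right + left]  # swap L/R joints
    out_flip = model(flipped)
    out_flip[..., 0] *= -1                                     # un-mirror x
    out_flip[:, :, left + right] = out_flip[:, :, right + left]
    return (out + out_flip) / 2
```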
### HumanEva-I
We provide the pre-trained MRCNN model [here](https://drive.google.com/file/d/1tRoDuygWSRWQsD9XuHCTHt13r0c5EZr6/view?usp=sharing) and the ground truth model [here](https://drive.google.com/file/d/1IEqwcDtqQe70Vf3CilWARkrFE-gYRrkA/view?usp=sharing). To evaluate them, put them into the `./checkpoint` directory and run the following. For the MRCNN model:
```bash
python run_s-agcn_HE_13.py -da -tta -d 'humaneva15' -k detectron_pt_coco -str 'Train/S1,Train/S2,Train/S3' -ste 'Validate/S1,Validate/S2,Validate/S3' -c 'checkpoint/humaneva' -a 'Walk,Jog,Box' -arc '3,3,3' -b 1024 --evaluate 96_detectron_pt_coco_27_supervised_epoch_990.bin --by-subject
```
For the ground truth model:
```bash
python run_s-agcn.py -da -tta -d 'humaneva15' -str 'Train/S1,Train/S2,Train/S3' -ste 'Validate/S1,Validate/S2,Validate/S3' -c 'checkpoint/humaneva' -a 'Walk,Jog,Box' -arc '3,3,3' -b 1024 --evaluate 96_gt_27_supervised_epoch_819.bin --by-subject
```
### MPI-INF-3DHP
We follow the experimental setup of [P-STMO](https://github.com/patrick-swk/p-stmo). To evaluate, put the checkpoint from [Google Drive](https://drive.google.com/drive/folders/1RFkIpRNR-78hu3lXF_yTxoiM6GVO0Jx9?usp=sharing) into the `./checkpoint` directory and run:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=22241 main_sagcn_gpus_individual_fc_3dhp.py --dataset '3dhp' --root_path data/s-agcn/ --batch_size 1200 --frames 81 --channel 256 --evaluate
```
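Evaluation on these benchmarks is typically reported as the mean per-joint position error (MPJPE) in millimetres, i.e., the mean Euclidean distance between predicted and ground truth joints. For reference:

```python
import numpy as np

def mpjpe(predicted, target):
    """Mean per-joint position error in mm.
    predicted, target: (frames, joints, 3) arrays in millimetres."""
    return np.linalg.norm(predicted - target, axis=-1).mean()
```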
## Training new models
To train a model from scratch, run:
```bash
python run_s-agcn.py -da -tta
```
`-da` enables data augmentation during training and `-tta` enables test-time augmentation.
For example, to train the 243-frame models from our paper on ground truth 2D poses or on CPN detections, run:
```bash
python run_s-agcn.py -k gt -arc '3,3,3,3,3'
```
or
```bash
python run_s-agcn.py -k cpn_ft_h36m_dbb -arc '3,3,3,3,3'
```
Training takes about 48 hours on two GeForce RTX 3090 GPUs.
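Following the VideoPose3D-style temporal design, the filter widths listed in `-arc` multiply to the model's receptive field, which is how `'3,3,3,3,3'` corresponds to the 243-frame models above and `'3,3,3'` to the 27-frame HumanEva models:

```python
from math import prod

# Receptive field = product of the per-block temporal filter widths.
assert prod([3, 3, 3, 3, 3]) == 243  # Human3.6M 243-frame models
assert prod([3, 3, 3]) == 27         # HumanEva-I 27-frame models
```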
## Visualization and other functions
We keep our code consistent with [VideoPose3D](https://github.com/facebookresearch/VideoPose3D). Please refer to their project page for further information.
## Package contents
The release archive `基于GLA-GCN的人体动作识别算法内含数据集以及预训练模型.zip` contains 53 files:
```
checkpoint/
    h36m/
        96_cpn_ft_h36m_dbb_243_supervised.bin            (20.99 MB)
        96_gt_243_supervised.bin                         (20.99 MB)
    humaneva/
        96_detectron_pt_coco_27_supervised_epoch_990.bin (13.13 MB)
        96_gt_27_supervised_epoch_819.bin                (13.13 MB)
common/
    agcn.py
    agcn_HE_13.py
    arguments.py
    arguments_HE_13.py
    camera.py
    generator_3dhp.py
    generators.py
    generators_HE.py
    h36m_dataset.py
    humaneva_dataset.py
    load_data_3dhp_mae.py
    load_data_hm36.py
    loss.py
    mocap_dataset.py
    opt.py
    quaternion.py
    ranger.py
    s_agcn.py
    s_agcn_HE_13.py
    skeleton.py
    tools.py
    utils.py
    utils_3dhp.py
    visualization.py
    __pycache__/        (18 compiled .pyc caches)
figures/
    architecture.png
    video.png
    Purchases.55011271.gif
main_sagcn_gpus_individual_fc_3dhp.py
run_s-agcn.py
run_s-agcn_HE_13.py
README.md
```