<div align="center">
<img src="docs/logo.jpg" width="400">
</div>
## News!
- Aug 2020: [**v0.4.0** version](https://github.com/MVIG-SJTU/AlphaPose) of AlphaPose is released! Stronger tracking! Now includes whole-body (face, hand, foot) keypoints! A [Colab](https://colab.research.google.com/drive/14Zgotr2_F0LfvcpRi03uQdMvUbLQSgok?usp=sharing) is now available.
- Dec 2019: [**v0.3.0** version](https://github.com/MVIG-SJTU/AlphaPose) of AlphaPose is released! Smaller model, higher accuracy!
- Apr 2019: [**MXNet** version](https://github.com/MVIG-SJTU/AlphaPose/tree/mxnet) of AlphaPose is released! It runs at **23 fps** on the COCO validation set.
- Feb 2019: [CrowdPose](docs/CrowdPose.md) is now integrated into AlphaPose!
- Dec 2018: [General version](https://github.com/MVIG-SJTU/AlphaPose/PoseFlow) of PoseFlow is released! 3X faster, with support for visualizing pose tracking results!
- Sep 2018: [**v0.2.0** version](https://github.com/MVIG-SJTU/AlphaPose/tree/pytorch) of AlphaPose is released! It runs at **20 fps** on the COCO validation set (4.6 people per image on average) and achieves 71 mAP!
## AlphaPose
[AlphaPose](http://www.mvig.org/research/alphapose.html) is an accurate multi-person pose estimator, and the **first open-source system that achieves 70+ mAP (75 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset.**
To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the **first open-source online pose tracker that achieves both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.**
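The frame-to-frame association idea can be illustrated with a toy greedy matcher on bounding-box IoU. This is only a simplified sketch of cross-frame identity assignment, not the actual Pose Flow algorithm (which builds pose flows from pose similarities over a sliding window of frames):

``` python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_match(prev_boxes, cur_boxes, thresh=0.3):
    """Greedily link each current detection to its best-overlapping
    previous detection, so it can inherit that track's identity."""
    pairs = sorted(((iou(p, c), i, j)
                    for i, p in enumerate(prev_boxes)
                    for j, c in enumerate(cur_boxes)), reverse=True)
    assigned_prev, assigned_cur, matches = set(), set(), {}
    for score, i, j in pairs:
        if score < thresh:
            break  # remaining pairs overlap too little to be the same person
        if i in assigned_prev or j in assigned_cur:
            continue  # one-to-one assignment only
        matches[j] = i
        assigned_prev.add(i)
        assigned_cur.add(j)
    return matches
```

Real trackers additionally use pose similarity and multi-frame consistency to survive occlusions and detector misses, which is exactly what Pose Flow adds on top of this basic idea.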
AlphaPose supports both Linux and **Windows!**
<div align="center">
<img src="docs/alphapose_17.gif" width="400" alt><br>
COCO 17 keypoints
</div>
<div align="center">
<img src="docs/alphapose_26.gif" width="400" alt><br>
<b><a href="https://github.com/Fang-Haoshu/Halpe-FullBody">Halpe 26 keypoints</a></b> + tracking
</div>
<div align="center">
<img src="docs/alphapose_136.gif" width="400" alt><br>
<b><a href="https://github.com/Fang-Haoshu/Halpe-FullBody">Halpe 136 keypoints</a></b> + tracking
</div>
## Results
### Pose Estimation
Results on COCO test-dev 2015:
<center>
| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
|:-------|:-----:|:-------:|:-------:|:-------:|:-------:|
| OpenPose (CMU-Pose) | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Detectron (Mask R-CNN) | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
| **AlphaPose** | **73.3** | **89.2** | **79.1** | **69.0** | **78.6** |
</center>
Results on MPII full test set:
<center>
| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Ave |
|:-------|:-----:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| OpenPose (CMU-Pose) | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
| Newell & Deng | **92.1** | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
| **AlphaPose** | 91.3 | **90.5** | **84.0** | **76.4** | **80.3** | **79.9** | **72.4** | **82.1** |
</center>
More results and models are available in [docs/MODEL_ZOO.md](docs/MODEL_ZOO.md).
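The COCO AP numbers above are computed by thresholding Object Keypoint Similarity (OKS), a Gaussian falloff of keypoint distance scaled by object area and a per-keypoint constant ("AP @0.5:0.95" averages AP over OKS thresholds from 0.5 to 0.95). A minimal sketch of the OKS computation, using the per-keypoint sigmas from the standard COCO evaluation:

``` python
import math

# Per-keypoint falloff constants (sigmas) from the COCO keypoint
# evaluation, one per COCO keypoint (nose, eyes, ears, shoulders, ...).
COCO_SIGMAS = [0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072,
               0.072, 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089]

def oks(pred, gt, visible, area):
    """Object Keypoint Similarity between a predicted and ground-truth pose.

    pred, gt: lists of (x, y) tuples, one per keypoint.
    visible:  list of bools, True where the ground-truth keypoint is labeled.
    area:     ground-truth object area (the person's scale).
    """
    total, n = 0.0, 0
    for (px, py), (gx, gy), v, sigma in zip(pred, gt, visible, COCO_SIGMAS):
        if not v:
            continue  # unlabeled keypoints do not count
        d2 = (px - gx) ** 2 + (py - gy) ** 2
        # Gaussian falloff: tolerance grows with object scale and
        # with the keypoint's inherent labeling noise (sigma).
        total += math.exp(-d2 / (2 * area * (2 * sigma) ** 2))
        n += 1
    return total / n if n else 0.0
```

A perfect prediction yields OKS 1.0; a prediction counts as a true positive at, say, AP @0.75 when its OKS with a ground-truth pose exceeds 0.75.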
### Pose Tracking
<p align='center'>
<img src="docs/posetrack.gif" width="360">
<img src="docs/posetrack2.gif" width="344">
</p>
Please read [trackers/README.md](trackers/) for details.
### CrowdPose
<p align='center'>
<img src="docs/crowdpose.gif" width="360">
</p>
Please read [docs/CrowdPose.md](docs/CrowdPose.md) for details.
## Installation
Please check out [docs/INSTALL.md](docs/INSTALL.md)
## Model Zoo
Please check out [docs/MODEL_ZOO.md](docs/MODEL_ZOO.md)
## Quick Start
- **Colab**: We provide a [Colab example](https://colab.research.google.com/drive/14Zgotr2_F0LfvcpRi03uQdMvUbLQSgok?usp=sharing) so you can get started quickly.
- **Inference**: Inference demo
``` bash
./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
```
- **Training**: Train from scratch
``` bash
./scripts/train.sh ${CONFIG} ${EXP_ID}
```
- **Validation**: Validate your model on MSCOCO val2017
``` bash
./scripts/validate.sh ${CONFIG} ${CHECKPOINT}
```
Examples:
Demo using `FastPose` model.
``` bash
./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
#or
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
```
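The demo writes its detections to a JSON results file (`alphapose-results.json`) in the output directory, where each entry carries an `image_id`, a flat `keypoints` list of (x, y, score) triples, and an overall pose `score`. A minimal sketch of reading the results back for downstream use; the exact field names follow the COCO-style output format, so treat them as assumptions if your version or output format setting differs:

``` python
import json

def load_poses(path):
    """Group AlphaPose JSON results by image and unflatten the keypoints."""
    with open(path) as f:
        results = json.load(f)
    poses = {}
    for det in results:
        kps = det["keypoints"]  # flat [x1, y1, c1, x2, y2, c2, ...]
        triples = [(kps[i], kps[i + 1], kps[i + 2])
                   for i in range(0, len(kps), 3)]
        poses.setdefault(det["image_id"], []).append(
            {"keypoints": triples, "score": det["score"]})
    return poses
```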
Train `FastPose` on the MSCOCO dataset.
``` bash
./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose
```
For more detailed inference options and examples, please refer to [GETTING_STARTED.md](docs/GETTING_STARTED.md).
## Common issue & FAQ
Check out [faq.md](docs/faq.md) for the FAQ. If it cannot solve your problem, or if you find any bugs, don't hesitate to open an issue on GitHub or make a pull request!
## Contributors
AlphaPose is based on RMPE (ICCV'17), authored by [Hao-Shu Fang](https://fang-haoshu.github.io/), Shuqin Xie, [Yu-Wing Tai](https://scholar.google.com/citations?user=nFhLmFkAAAAJ&hl=en) and [Cewu Lu](http://www.mvig.org/), who is the corresponding author. Currently, it is maintained by [Jiefeng Li\*](http://jeff-leaf.site/), [Hao-shu Fang\*](https://fang-haoshu.github.io/), [Yuliang Xiu](http://xiuyuliang.cn/about/) and [Chao Xu](http://www.isdas.cn/).
The main contributors are listed in [docs/contributors.md](docs/contributors.md).
## TODO
- [x] Multi-GPU/CPU inference
- [ ] 3D pose
- [x] add tracking flag
- [ ] PyTorch C++ version
- [ ] Add MPII and AIC data
- [ ] dense support
- [x] small box easy filter
- [x] Crowdpose support
- [ ] Speed up PoseFlow
- [ ] Add stronger/light detectors and the [mobile pose](https://github.com/YuliangXiu/MobilePose-pytorch)
- [x] High level API
We would really appreciate it if you could offer any help and become a [contributor](docs/contributors.md) to AlphaPose.
## Citation
Please cite these papers in your publications if it helps your research:
```
@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@article{li2018crowdpose,
  title={CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark},
  author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
  journal={arXiv preprint arXiv:1812.00324},
  year={2018}
}

@inproceedings{xiu2018poseflow,
  author={Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title={{Pose Flow}: Efficient Online Pose Tracking},
  booktitle={BMVC},
  year={2018}
}
```
## License
AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.