# trt_pose
> Want to detect hand poses? Check out the new [trt_pose_hand](http://github.com/NVIDIA-AI-IOT/trt_pose_hand) project for real-time hand pose and gesture recognition!
<img src="https://user-images.githubusercontent.com/4212806/67125332-71a64580-f1a9-11e9-8ee1-e759a38de215.gif" height=256/>
trt_pose is aimed at enabling real-time pose estimation on NVIDIA Jetson. You may find it useful for other NVIDIA platforms as well. Currently, the project includes:
- Pre-trained models for human pose estimation capable of running in real time on Jetson Nano. This makes it easy to detect features like ``left_eye``, ``left_elbow``, ``right_ankle``, etc.
- Training scripts to train on any keypoint task data in [MSCOCO](https://cocodataset.org/#home) format. This means you can experiment with training trt_pose for keypoint detection tasks other than human pose.
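Since the training scripts consume MSCOCO-format keypoint data, it helps to know how that format stores keypoints: each annotation holds a flat list of (x, y, visibility) triples. A minimal sketch with made-up coordinate values:

```python
# A minimal annotation in MSCOCO keypoint format (coordinate values are made up).
# "keypoints" is a flat list of (x, y, v) triples, where v = 0 means the point
# is not labeled, 1 means labeled but occluded, and 2 means labeled and visible.
annotation = {
    "category_id": 1,
    "num_keypoints": 2,
    "keypoints": [120, 80, 2, 130, 85, 1, 0, 0, 0],
}

kp = annotation["keypoints"]
triples = [kp[i:i + 3] for i in range(0, len(kp), 3)]
labeled = [t for t in triples if t[2] > 0]
print(len(labeled))  # 2
```

For a custom keypoint task, you would define your own category (keypoint names and skeleton) and annotate your images in this same layout.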
To get started, follow the instructions below. If you run into any issues please [let us know](../../issues).
## Getting Started
To get started with trt_pose, follow these steps.
### Step 1 - Install Dependencies
1. Install PyTorch and Torchvision. To do this on NVIDIA Jetson, we recommend following [this guide](https://forums.developer.nvidia.com/t/72048).
2. Install [torch2trt](https://github.com/NVIDIA-AI-IOT/torch2trt)
```bash
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install --plugins
```
3. Install other miscellaneous packages
```bash
sudo pip3 install tqdm cython pycocotools
sudo apt-get install python3-matplotlib
```
### Step 2 - Install trt_pose
```bash
git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
sudo python3 setup.py install
```
### Step 3 - Run the example notebook
We provide a couple of human pose estimation models pre-trained on the MSCOCO dataset. The throughput in FPS is shown for each platform.
| Model | Jetson Nano | Jetson Xavier | Weights |
|-------|-------------|---------------|---------|
| resnet18_baseline_att_224x224_A | 22 | 251 | [download (81MB)](https://drive.google.com/open?id=1XYDdCUdiF2xxx4rznmLb62SdOUZuoNbd) |
| densenet121_baseline_att_256x256_B | 12 | 101 | [download (84MB)](https://drive.google.com/open?id=13FkJkx7evQ1WwP54UmdiDXWyFMY1OxDU) |
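As a quick sanity check when budgeting a pipeline, the throughput figures above can be converted into approximate per-frame latencies. This is simple arithmetic on the table, not an additional measurement:

```python
# Per-frame latency implied by the throughput table above (1000 ms / FPS).
fps = {
    ("resnet18_baseline_att_224x224_A", "Jetson Nano"): 22,
    ("resnet18_baseline_att_224x224_A", "Jetson Xavier"): 251,
    ("densenet121_baseline_att_256x256_B", "Jetson Nano"): 12,
    ("densenet121_baseline_att_256x256_B", "Jetson Xavier"): 101,
}
for (model, platform), f in fps.items():
    print(f"{model} on {platform}: ~{1000 / f:.1f} ms/frame")
```

For example, 22 FPS on Jetson Nano leaves roughly 45 ms per frame for inference plus pre/post-processing.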
To run the live Jupyter Notebook demo on real-time camera input, follow these steps:
1. Download the model weights using the link in the above table.
2. Place the downloaded weights in the [tasks/human_pose](tasks/human_pose) directory.
3. Open and follow the [live_demo.ipynb](tasks/human_pose/live_demo.ipynb) notebook.
> You may need to modify the notebook, depending on which model you use.
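One detail that typically needs changing when you switch models is the input resolution, which is encoded in the model name (e.g. `224x224`). A small helper that recovers it — this function is hypothetical, not part of the trt_pose API:

```python
import re

def input_size_from_name(model_name):
    """Extract the WxH input resolution encoded in a trt_pose model name."""
    m = re.search(r"(\d+)x(\d+)", model_name)
    if m is None:
        raise ValueError(f"no WxH resolution found in {model_name!r}")
    return int(m.group(1)), int(m.group(2))

print(input_size_from_name("resnet18_baseline_att_224x224_A"))     # (224, 224)
print(input_size_from_name("densenet121_baseline_att_256x256_B"))  # (256, 256)
```

Any camera-capture and preprocessing dimensions in the notebook should match the resolution the chosen model was trained at.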
## See also
- [trt_pose_hand](http://github.com/NVIDIA-AI-IOT/trt_pose_hand) - Real-time hand pose estimation based on trt_pose
- [torch2trt](http://github.com/NVIDIA-AI-IOT/torch2trt) - An easy to use PyTorch to TensorRT converter
- [JetBot](http://github.com/NVIDIA-AI-IOT/jetbot) - An educational AI robot based on NVIDIA Jetson Nano
- [JetRacer](http://github.com/NVIDIA-AI-IOT/jetracer) - An educational AI racecar using NVIDIA Jetson Nano
- [JetCam](http://github.com/NVIDIA-AI-IOT/jetcam) - An easy to use Python camera interface for NVIDIA Jetson
## References
The trt_pose model architectures listed above are inspired by the following works, but are not direct replicas. Please review the open-source code and configuration files in this repository for architecture details. If you have any questions, feel free to reach out.
* _Cao, Zhe, et al. "Realtime multi-person 2d pose estimation using part affinity fields." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017._
* _Xiao, Bin, Haiping Wu, and Yichen Wei. "Simple baselines for human pose estimation and tracking." Proceedings of the European Conference on Computer Vision (ECCV). 2018._