# TensorRT OpenPifPaf Pose Estimation
TensorRT OpenPifPaf Pose Estimation is a Jetson-friendly application that runs inference with a [TensorRT](https://developer.nvidia.com/tensorrt) engine to extract human poses. The provided TensorRT engine is generated from an ONNX model exported from [OpenPifPaf](https://github.com/vita-epfl/openpifpaf) version 0.12a4 using the [ONNX-TensorRT](https://github.com/onnx/onnx-tensorrt) repo.
## Getting Started
The following instructions will help you get started.
### Prerequisites
**Hardware**
* [NVIDIA Jetson TX2](https://developer.nvidia.com/embedded/jetson-tx2)
* [NVIDIA Jetson Nano](https://developer.nvidia.com/embedded/jetson-nano)
**Software**
* You should have [Docker](https://docs.docker.com/get-docker/) on your device.
### Install
```bash
git clone https://github.com/galliot-us/pose-estimation.git
cd pose-estimation/
```
### Usage
#### Run on Jetson
* You need [JetPack 4.4](https://developer.nvidia.com/jetpack-43-archive) installed on your Jetson device to run this pose estimation application.
* The application takes an image or a video as input at 321x193 or 641x369 resolution. Two ONNX models of OpenPifPaf version 0.12a4 with these input sizes (`openpifpaf_resnet50_321_193.onnx` and `openpifpaf_resnet50_641_369.onnx`) are provided in [Galliot-Models](https://github.com/galliot-us/models/tree/master/ONNX/openpifpaf_12a4).
* The ONNX model is downloaded based on the specifications in the config file, and the TensorRT engine is generated from it automatically by the application when you perform the following steps.
* Note that you need to have `nvidia-container-runtime` installed and Docker's default runtime set to `nvidia` in `/etc/docker/daemon.json` to have GPU access during `docker build`.
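For reference, a typical `/etc/docker/daemon.json` that registers the NVIDIA runtime and makes it the default looks like the sketch below (restart the Docker daemon after editing; the runtime is installed as part of JetPack, but verify the binary name on your device):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```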
```bash
# 1) Download a sample input video
./download_sample_video.sh
# 2) Build the Docker image for Jetson (optional: skip this step to pull the prebuilt image from Docker Hub instead)
docker build -f jetson-4-4-openpifpaf.Dockerfile -t "galliot/pose-estimation-openpifpaf:latest-jetson-4-4" .
# 3) Run the Docker container
docker run --runtime nvidia --privileged -it -v $PWD:/repo galliot/pose-estimation-openpifpaf:latest-jetson-4-4
```
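Inside the container, the application resolves which ONNX model to fetch from the configured input size. The application's actual config schema isn't shown here, so the following is only a minimal sketch of that resolution step, with the helper name `model_url_for` and the URL layout assumed for illustration (the filenames match the Galliot-Models repo listed above):

```python
# Sketch: map a configured input size to the matching OpenPifPaf ONNX model URL.
# NOTE: model_url_for and the URL base below are illustrative assumptions,
# not the application's real Config interface.

GALLIOT_MODELS_BASE = (
    "https://github.com/galliot-us/models/raw/master/ONNX/openpifpaf_12a4"
)

# The two input sizes this application supports, per the README above.
SUPPORTED_SIZES = {
    (321, 193): "openpifpaf_resnet50_321_193.onnx",
    (641, 369): "openpifpaf_resnet50_641_369.onnx",
}


def model_url_for(width: int, height: int) -> str:
    """Return the download URL for the ONNX model matching (width, height)."""
    try:
        name = SUPPORTED_SIZES[(width, height)]
    except KeyError:
        raise ValueError(
            f"Unsupported input size {width}x{height}; "
            f"choose one of {sorted(SUPPORTED_SIZES)}"
        )
    return f"{GALLIOT_MODELS_BASE}/{name}"


if __name__ == "__main__":
    print(model_url_for(641, 369))
```

Once the ONNX file is on disk, the application builds the TensorRT engine from it on first run, so the engine is always generated for the Jetson device it will execute on.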