# Deepstream-YOLO-Pose
<div style="text-align: center;">
<figure>
<img src="imgs/Multistream_4_YOLOv8s-pose-3.PNG" alt="Multistream_4_YOLOv8s-pose-3.PNG" width="600">
<figcaption> <br> YOLO-Pose accelerated with TensorRT and multi-streaming with Deepstream SDK </figcaption>
</figure>
</div>
---
[![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fatrox%2Fsync-dotenv%2Fbadge&style=flat)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![Python Version](https://img.shields.io/badge/Python-3.8--3.10-FFD43B?logo=python)](https://github.com/triple-Mu/YOLOv8-TensorRT)
[![img](https://badgen.net/badge/icon/tensorrt?icon=azurepipelines&label)](https://developer.nvidia.com/tensorrt)
[![img](https://badgen.net/github/prs/YunghuiHsu/deepstream-yolo-pose)](https://github.com/YunghuiHsu/deepstream-yolo-pose/pulls)
[![img](https://img.shields.io/github/stars/YunghuiHsu/deepstream-yolo-pose?color=ccf)](https://github.com/YunghuiHsu/deepstream-yolo-pose)
---
# System Requirements
- Ubuntu 20.04
- Python 3.8 (ships with Ubuntu 20.04)
- CUDA 11.4 (Jetson)
- TensorRT 8+
### DeepStream 6.x on x86 platform
* [Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
* [CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)
* [TensorRT 8.5 GA Update 1 (8.5.2.2)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
* [NVIDIA Driver 525.85.12 (Data center / Tesla series) / 525.105.17 (TITAN, GeForce RTX / GTX series and RTX / Quadro series)](https://www.nvidia.com.br/Download/index.aspx)
* [NVIDIA DeepStream SDK 6.x](https://developer.nvidia.com/deepstream-getting-started)
* [GStreamer 1.16.3](https://gstreamer.freedesktop.org/)
* [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
### DeepStream 6.x on Jetson platform
- [JetPack 5.1.1 / 5.1](https://developer.nvidia.com/embedded/jetpack)
- [NVIDIA DeepStream SDK](https://developer.nvidia.com/deepstream-sdk)
- Download and install from https://developer.nvidia.com/deepstream-download
- [DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
## DeepStream Python Bindings
- [DeepStream Python Bindings](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/bindings)
## Gst-python and GstRtspServer
- Installing GstRtspServer and introspection typelib
```
sudo apt update
sudo apt install python3-gi python3-dev python3-gst-1.0 -y
sudo apt-get install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
```
For gst-rtsp-server (and other GStreamer components) to be accessible in
Python through `gi.require_version()`, it must be built with
gobject-introspection enabled (`libgstrtspserver-1.0-0` already is).
We still need to install the introspection typelib packages:
```
sudo apt-get install libgirepository1.0-dev
sudo apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
```
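After installing the packages above, you can verify that the typelib is visible to Python. The snippet below is a minimal check (the function name is illustrative); it only confirms the binding is importable and does not start an RTSP server:

```python
def check_rtsp_binding():
    """Return a status string for the GstRtspServer Python binding."""
    try:
        import gi
        gi.require_version("Gst", "1.0")
        gi.require_version("GstRtspServer", "1.0")
        from gi.repository import Gst, GstRtspServer  # noqa: F401
        return "GstRtspServer binding OK"
    except (ImportError, ValueError) as err:
        # ImportError: python3-gi missing; ValueError: typelib not found
        return f"GstRtspServer binding missing: {err}"

print(check_rtsp_binding())
```

If this prints the "missing" message, re-check that `gir1.2-gst-rtsp-server-1.0` installed correctly.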
---
# Prepare YOLO-Pose Model
<div style="text-align: center;">
<figure>
<img src="imgs/YOLO-pose_architecture_based_on_YOLOv5.PNG" alt="netron_yolov8s-pose_dy_onnx.PNG" width="600">
<figcaption> <br>YOLO-pose architecture <br> </figcaption>
</figure>
</div>
source : [YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object Keypoint Similarity Loss](https://arxiv.org/abs/2204.06806)
- [ ] [YOLOv7](https://github.com/WongKinYiu/yolov7)
  - [Gwencong/yolov7-pose-tensorrt](https://github.com/Gwencong/yolov7-pose-tensorrt)
  - [nanmi/yolov7-pose](https://github.com/nanmi/yolov7-pose)
    - supports a [single batch only](https://github.com/nanmi/yolov7-pose/issues/20)
    - some problems with `/YoloLayer_TRT_v7.0/build/libyolo.so`: the detection boxes are not synchronized with the video frames on Jetson
- [x] [YOLOv8](https://github.com/ultralytics/ultralytics)
## Prepare [YOLOv8](https://github.com/ultralytics/ultralytics) TensorRT Engine
- YOLOv8-pose is chosen because its ONNX export is better optimized at the operator level
- Based on [triple-Mu/YOLOv8-TensorRT/Pose.md](https://github.com/triple-Mu/YOLOv8-TensorRT/blob/main/docs/Pose.md)
- The yolov8-pose model conversion route is: YOLOv8 PyTorch model -> ONNX -> TensorRT engine
***Notice !!! :warning:*** This repository does not support TensorRT API building !!!
### 0. Get `yolov8s-pose.pt`
https://github.com/ultralytics/ultralytics
<details><summary>Benchmark of YOLOv8-Pose</summary>
See [Pose Docs](https://docs.ultralytics.com/tasks/pose) for usage examples with these models.
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO Keypoints val2017](http://cocodataset.org)
dataset.
<br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
- Source : [ultralytics](https://github.com/ultralytics/ultralytics)
</details>
```
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt
```
### 1. Pytorch Model to Onnx Model
- Export the ONNX model with ultralytics
  You can leave this repo and use the original `ultralytics` package for ONNX export.
  - CLI tool (the `yolo` command from [ultralytics](https://docs.ultralytics.com))
    - Recommended on a server for faster export speed :zap:
    - ref : [ultralytics.com/modes/export](https://docs.ultralytics.com/modes/export/#arguments)
    - Usage (after `pip3 install ultralytics`):
```shell
yolo export model=yolov8s-pose.pt format=onnx
```
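Once the model is exported, the pose head's raw output must be decoded before drawing skeletons. The exact tensor layout depends on how the ONNX model was exported; the sketch below assumes the common raw YOLOv8-pose head shape `(1, 56, 8400)`, where each of the 8400 candidates carries 4 box values (cx, cy, w, h), 1 person-confidence score, and 17 keypoints of (x, y, conf). The function name and threshold are illustrative, and NMS is omitted for brevity:

```python
import numpy as np

def decode_pose_output(pred: np.ndarray, conf_thres: float = 0.25):
    """Split a raw (1, 56, 8400) YOLOv8-pose tensor into boxes, scores, keypoints.

    Assumed layout per candidate: rows 0-3 box (cx, cy, w, h),
    row 4 person confidence, rows 5-55 are 17 keypoints x (x, y, conf).
    """
    pred = pred[0].T                              # (8400, 56)
    scores = pred[:, 4]
    keep = scores > conf_thres                    # confidence filter (no NMS here)
    boxes_cxcywh = pred[keep, :4]                 # (N, 4)
    kpts = pred[keep, 5:].reshape(-1, 17, 3)      # (N, 17, 3) -> (x, y, conf) per joint
    return boxes_cxcywh, scores[keep], kpts

# toy check with random data in place of a real inference result
dummy = np.random.rand(1, 56, 8400).astype(np.float32)
boxes, scores, kpts = decode_pose_output(dummy, conf_thres=0.5)
print(boxes.shape, kpts.shape)
```

In the DeepStream pipeline this decoding runs inside a pad-probe callback after the inference element, on the tensor meta attached to each frame.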