# PoseNet and TensorFlow.js
This is an example of using a pre-trained model in the browser. In this particular case it is PoseNet, built on [MobileNet](https://arxiv.org/abs/1704.04861), an efficient CNN for mobile vision. PoseNet can detect human figures in images and videos using either a single-pose or a multi-pose algorithm. For more details about this machine learning model, [see this blog post](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) for a high-level description of PoseNet running on TensorFlow.js.
**See [demo here](https://jscriptcoder.github.io/tfjs-posenet/)**
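
For context, the component wraps the `@tensorflow-models/posenet` package. The snippet below is only a minimal sketch of that underlying API (the 1.x-style calls matching the `mobileNetArchitecture` values documented further down), not the component's actual implementation; the React props documented below map onto these arguments.

```js
import * as posenet from '@tensorflow-models/posenet'

async function detectSinglePose(video) {
  // Load the MobileNet-based weights (multiplier: 0.50, 0.75, 1.00 or 1.01)
  const net = await posenet.load(0.75)

  // Arguments: image/video element, imageScaleFactor, flipHorizontal, outputStride
  const pose = await net.estimateSinglePose(video, 0.5, false, 16)

  // pose.score is the overall confidence; pose.keypoints holds 17 body parts,
  // each with its own score and { x, y } position
  return pose
}
```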

**Notes:**
1. This code is based on the [tfjs-models/posenet](https://github.com/tensorflow/tfjs-models/tree/master/posenet) model released by the TensorFlow team. I borrowed it, adapted it, and turned it into a React component.
2. Keep in mind I have only tested it in Chrome. Honestly, I don't care about other browsers for this kind of experiment.
3. For obvious reasons, you must allow the use of your webcam. Don't worry, the images stay in your browser. Let's call it GDPR compliance :stuck_out_tongue_winking_eye:
## PoseNet React Component
```jsx
import * as React from 'react'
import ReactDOM from 'react-dom'
import PoseNet from './PoseNet'
ReactDOM.render(
  <PoseNet
    /* Default value: 600 */
    videoWidth={ 600 }

    /* Default value: 500 */
    videoHeight={ 500 }

    /*
      Whether the poses should be flipped/mirrored horizontally.
      Set this to true for videos that are flipped horizontally
      by default (i.e. a webcam) when you want the poses to be
      returned in the proper orientation.
      Default value: false
    */
    flipHorizontal={ false }

    /*
      There are two possible values: 'single-pose' | 'multi-pose'.
      Default value: 'single-pose'
    */
    algorithm={ 'single-pose' }

    /*
      Loads the PoseNet model weights for either the 0.50, 0.75,
      1.00, or 1.01 version. They vary in size and accuracy. 1.01
      is the largest, but will be the slowest. 0.50 is the fastest,
      but least accurate.
      Default value: 1.01
    */
    mobileNetArchitecture={ 1.01 }

    /* Default value: true */
    showVideo={ true }

    /* Default value: true */
    showSkeleton={ true }

    /* Default value: true */
    showPoints={ true }

    /*
      The overall confidence in the estimation of a person's
      pose (i.e. a person detected in a frame).
      Default value: 0.1
    */
    minPoseConfidence={ 0.1 }

    /*
      The confidence that a particular estimated keypoint
      position is accurate (i.e. the elbow's position).
      Default value: 0.5
    */
    minPartConfidence={ 0.5 }

    /*
      The maximum number of poses to detect.
      Default value: 2
    */
    maxPoseDetections={ 2 }

    /*
      Non-maximum suppression part distance. It needs to be strictly positive.
      Two parts suppress each other if they are less than nmsRadius pixels apart.
      Default value: 20
    */
    nmsRadius={ 20.0 }

    /*
      Must be 32, 16, or 8. This parameter affects the height and width
      of the layers in the neural network. At a high level, it affects
      the accuracy and speed of the pose estimation: the lower the
      output stride, the higher the accuracy but the slower the speed;
      the higher the output stride, the faster the speed but the lower
      the accuracy.
    */
    outputStride={ 16 }

    /*
      Values between 0.2 and 1. Scales down the image to increase
      the speed when feeding it through the network, at the cost of accuracy.
      Default value: 0.5
    */
    imageScaleFactor={ 0.5 }

    /* Default value: 'aqua' */
    skeletonColor={ 'aqua' }

    /* Default value: 2 */
    skeletonLineWidth={ 2 }

    /* Default value: 'Loading pose detector...' */
    loadingText={ 'Loading pose detector...' }
  />,
  document.getElementById('app')
)
```
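
When `algorithm` is set to `'multi-pose'`, the props above map roughly one-to-one onto the arguments of PoseNet's multi-pose call. Again, this is only a sketch against the 1.x-style `@tensorflow-models/posenet` API, not the component's internal code:

```js
import * as posenet from '@tensorflow-models/posenet'

async function detectMultiplePoses(video) {
  const net = await posenet.load(1.01) // cf. mobileNetArchitecture

  // imageScaleFactor, flipHorizontal, outputStride,
  // maxPoseDetections, scoreThreshold (cf. minPartConfidence), nmsRadius
  const poses = await net.estimateMultiplePoses(video, 0.5, false, 16, 2, 0.5, 20)

  // Keep only detections above the overall pose confidence (cf. minPoseConfidence)
  return poses.filter(pose => pose.score >= 0.1)
}
```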
## Installing and running example
```
$ npm install
$ npm run example
```
The browser will open http://localhost:8080/. Have fun :wink:
## How to consume the component
```
$ npm install jscriptcoder/tfjs-posenet
```
```js
import * as React from 'react'
import PoseNet from 'tfjs-posenet'

const MyContainer = (props) => (
  <div>
    <h3>This is my container</h3>
    <PoseNet
      videoWidth={props.width}
      videoHeight={props.height}
      skeletonColor={props.color} />
  </div>
)
```