# Gesture Recognition Magic Wand Training Scripts
## Introduction
The scripts in this directory can be used to train a TensorFlow model that
classifies gestures based on accelerometer data. The code uses Python 3.7 and
TensorFlow 2.0. The resulting model is less than 20KB in size.
The following document contains instructions on using the scripts to train a
model, and capturing your own training data.
This project was inspired by the [Gesture Recognition Magic Wand](https://github.com/jewang/gesture-demo)
project by Jennifer Wang.
## Training
### Dataset
Three magic gestures ("wing", "ring", and "slope") were chosen, and data were
collected from seven different people. Long sequences of random movement were
also recorded and divided into shorter pieces; together with some
automatically generated random data, these make up the "negative" class.
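The negative-class construction described above can be sketched as follows. This is an illustrative sketch, not code from the shipped scripts; the window length, stride, and value range are assumptions for the example:

```python
import random

def make_negative_windows(long_seq, window=128, stride=128):
    """Cut a long random-movement recording into fixed-length
    negative samples by sliding a window over it."""
    return [long_seq[i:i + window]
            for i in range(0, len(long_seq) - window + 1, stride)]

def make_random_negatives(n, window=128, scale=500.0, seed=0):
    """Generate synthetic windows of random (x, y, z) accelerometer
    readings as additional negative samples."""
    rng = random.Random(seed)
    return [[(rng.uniform(-scale, scale),
              rng.uniform(-scale, scale),
              rng.uniform(-scale, scale)) for _ in range(window)]
            for _ in range(n)]
```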
The dataset can be downloaded from the following URL:
[download.tensorflow.org/models/tflite/magic_wand/data.tar.gz](http://download.tensorflow.org/models/tflite/magic_wand/data.tar.gz)
### Training in Colab
The following [Google Colaboratory](https://colab.research.google.com)
notebook demonstrates how to train the model. It's the easiest way to get
started:
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
If you'd prefer to run the scripts locally, use the following instructions.
### Running the scripts
Use the following command to install the required dependencies:
```shell
pip install -r requirements.txt
```
There are two ways to train the model:
- Random data split, which mixes different people's data together and randomly
splits them into training, validation, and test sets
- Person data split, which splits the data by person
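The difference between the two strategies can be sketched like this. This is an illustrative sketch, not code from the shipped scripts; the 60/20/20 ratio and the sample representation are assumptions for the example:

```python
import random

def random_split(samples, train=0.6, valid=0.2, seed=0):
    """Mix all samples together, then cut into train/valid/test."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(len(samples) * train)
    n_valid = int(len(samples) * valid)
    return (samples[:n_train],
            samples[n_train:n_train + n_valid],
            samples[n_train + n_valid:])

def person_split(samples, valid_people, test_people):
    """Hold out whole people, so the test set contains only
    users the model has never seen during training."""
    train, valid, test = [], [], []
    for person, data in samples:
        if person in test_people:
            test.append((person, data))
        elif person in valid_people:
            valid.append((person, data))
        else:
            train.append((person, data))
    return train, valid, test
```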
#### Random data split
Using a random split results in higher training accuracy than a person split,
but inferior performance on new data, since samples from each person appear in
both the training and test sets and the model is never evaluated on a truly
unseen user.
```shell
$ python data_prepare.py
$ python data_split.py
$ python train.py --model CNN --person false
```
#### Person data split
Using a person data split results in lower training accuracy but better
performance on new data.
```shell
$ python data_prepare.py
$ python data_split_person.py
$ python train.py --model CNN --person true
```
#### Model type
In the `--model` argument, you can provide `CNN` or `LSTM`. The CNN model has a
smaller size and lower latency.
## Collecting new data
To obtain new training data using the
[SparkFun Edge development board](https://sparkfun.com/products/15170), you can
modify one of the examples in the [SparkFun Edge BSP](https://github.com/sparkfun/SparkFun_Edge_BSP)
and deploy it using the Ambiq SDK.
### Install the Ambiq SDK and SparkFun Edge BSP
Follow SparkFun's
[Using SparkFun Edge Board with Ambiq Apollo3 SDK](https://learn.sparkfun.com/tutorials/using-sparkfun-edge-board-with-ambiq-apollo3-sdk/all)
guide to set up the Ambiq SDK and SparkFun Edge BSP.
#### Modify the example code
First, `cd` into
`AmbiqSuite-Rel2.2.0/boards/SparkFun_Edge_BSP/examples/example1_edge_test`.
##### Modify `src/tf_adc/tf_adc.c`
Add `true` in line 62 as the second parameter of function
`am_hal_adc_samples_read`.
##### Modify `src/main.c`
Add the line below in `int main(void)`, just before the line `while(1)`:
```cc
am_util_stdio_printf("-,-,-\r\n");
```
Change the following lines in `while(1){...}`
```cc
am_util_stdio_printf("Acc [mg] %04.2f x, %04.2f y, %04.2f z, Temp [deg C] %04.2f, MIC0 [counts / 2^14] %d\r\n", acceleration_mg[0], acceleration_mg[1], acceleration_mg[2], temperature_degC, (audioSample) );
```
to:
```cc
am_util_stdio_printf("%04.2f,%04.2f,%04.2f\r\n", acceleration_mg[0], acceleration_mg[1], acceleration_mg[2]);
```
#### Flash the binary
Follow the instructions in
[SparkFun's guide](https://learn.sparkfun.com/tutorials/using-sparkfun-edge-board-with-ambiq-apollo3-sdk/all#example-applications)
to flash the binary to the device.
#### Collect accelerometer data
First, in a new terminal window, run the following command to begin logging
output to `output.txt`:
```shell
$ script output.txt
```
Next, in the same window, use `screen` to connect to the device:
```shell
$ screen ${DEVICENAME} 115200
```
Accelerometer readings will be shown on the screen and saved to `output.txt`,
one "x,y,z" sample per line.
Press the `RST` button to start capturing a new gesture, then press Button 14
when it ends. Each new capture begins with a "-,-,-" line.
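The log format above (one "x,y,z" sample per line, with a "-,-,-" marker starting each new gesture) can be parsed into per-gesture sequences with a short helper along these lines. This is a sketch for illustration, not one of the shipped scripts:

```python
def parse_log(lines):
    """Split accelerometer log lines into per-gesture sequences.

    Each "-,-,-" marker starts a new gesture; other lines are
    "x,y,z" float triples. Malformed or non-data lines are skipped.
    """
    gestures, current = [], []
    for line in lines:
        line = line.strip()
        if line == "-,-,-":
            if current:
                gestures.append(current)
            current = []
        else:
            try:
                x, y, z = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip noise captured by `screen`
            current.append((x, y, z))
    if current:
        gestures.append(current)
    return gestures
```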
To exit `screen`, press `Ctrl+A`, immediately followed by the `K` key,
then press `Y`. Then run
```shell
$ exit
```
to stop logging data. For compatibility with the training scripts, rename
`output.txt` to include the gesture name and the person's name, in the
following format:
```
output_{gesture_name}_{person_name}.txt
```
#### Edit and run the scripts
Edit the following files to include your new gesture names (replacing
"wing", "ring", and "slope"):
- `data_load.py`
- `data_prepare.py`
- `data_split.py`
Edit the following files to include your new person names (replacing "hyw",
"shiyun", "tangsy", "dengyl", "jiangyh", "xunkai", "lsj", "pengxl", "liucx",
and "zhangxy"):
- `data_prepare.py`
- `data_split_person.py`
Finally, run the commands described earlier to train a new model.