**All scripts assume you are in the mlengine folder.**
## Train locally
```bash
python trainer/task.py
```
or
```bash
gcloud ml-engine local train --module-name trainer.task --package-path trainer
```
or with custom hyperparameters
```bash
gcloud ml-engine local train --module-name trainer.task --package-path trainer -- --hp-iterations 3000 --hp-dropout 0.5
```
Do not forget the empty `--` between the gcloud parameters and your own. The list of tunable hyperparameters is displayed on each run; check the code to add more.
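The `--hp-*` flags after the bare `--` are consumed by the trainer itself. A minimal sketch (not the actual `trainer/task.py`) of how such flags could be wired up with `argparse`; the default values shown are assumptions:

```python
# Sketch: parsing tunable hyperparameters passed after the bare "--".
# Flag names match the README example; defaults are hypothetical.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--hp-iterations", type=int, default=10000,
                    help="number of training iterations")
parser.add_argument("--hp-dropout", type=float, default=0.75,
                    help="dropout keep probability")

# Simulate: gcloud ... -- --hp-iterations 3000 --hp-dropout 0.5
args = parser.parse_args(["--hp-iterations", "3000", "--hp-dropout", "0.5"])
print(args.hp_iterations, args.hp_dropout)  # 3000 0.5
```

Note that `argparse` turns `--hp-iterations` into the attribute `args.hp_iterations` (dashes become underscores).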
## Train in the cloud
(jobXXX, jobs/jobXXX, <project> and <bucket> must be replaced with your own values)
```bash
gcloud ml-engine jobs submit training jobXXX --job-dir gs://<bucket>/jobs/jobXXX --project <project> --config config.yaml --module-name trainer.task --package-path trainer --runtime-version 1.4
```
`--runtime-version` specifies the version of TensorFlow to use.
## Train in the cloud with hyperparameter tuning
(jobXXX, jobs/jobXXX, <project> and <bucket> must be replaced with your own values)
```bash
gcloud ml-engine jobs submit training jobXXX --job-dir gs://<bucket>/jobs/jobXXX --project <project> --config config-hptune-6.yaml --module-name trainer.task --package-path trainer --runtime-version 1.4
```
`--runtime-version` specifies the version of TensorFlow to use.
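Hyperparameter tuning is driven entirely by the config file passed via `--config`. A minimal sketch of what such a file could contain, following the Cloud ML Engine `trainingInput.hyperparameters` schema; the metric tag, trial counts, and parameter range shown here are assumptions (see the actual `config-hptune-*.yaml` files for the settings used in this sample):

```yaml
# Hypothetical hyperparameter tuning config (not the shipped one).
trainingInput:
  scaleTier: BASIC
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: accuracy   # must match a metric reported by the trainer
    maxTrials: 20
    maxParallelTrials: 2
    params:
      - parameterName: hp-dropout       # tuned flag, passed as --hp-dropout
        type: DOUBLE
        minValue: 0.3
        maxValue: 0.8
        scaleType: UNIT_LINEAR_SCALE
```

Each trial launches the trainer with the chosen values appended as command-line flags, which is why the tunable hyperparameters are exposed as `--hp-*` arguments.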
## Predictions from the cloud
Use the Cloud ML Engine UI to create a model and a version from
the data saved during your training run. You will find it in the folder:
`gs://<bucket>/jobs/jobXXX/export/Servo/XXXXXXXXXX`
Set your version of the model as the default version, then
create the JSON payload. You can use this script:
```bash
python digits.py > digits.json
```
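The payload format expected by `--json-instances` is one JSON object per line, one line per instance. A sketch of how such a file could be built (not the actual `digits.py`); the instance key `"image"` is an assumption and must match the input name of your model's serving signature:

```python
# Sketch: build a --json-instances payload (one JSON object per line).
# The key "image" is hypothetical; it must match the serving input name.
import json

def make_payload(images):
    """images: list of 28x28 grayscale pixel arrays with values in [0, 1]."""
    return "\n".join(json.dumps({"image": img}) for img in images)

blank = [[0.0] * 28 for _ in range(28)]  # a dummy all-black digit
payload = make_payload([blank, blank])
```

Writing `payload` to `digits.json` gives a file with one prediction instance per line.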
Then call the online predictions service, replacing <model_name> with the name you have assigned:
```bash
gcloud ml-engine predict --model <model_name> --json-instances digits.json
```
It should return a perfect scorecard:
| CLASSES | PREDICTIONS |
| ------------- | ------------- |
| 8 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0] |
| 7 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] |
| 7 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] |
| 5 | [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0] |
| 5 | [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0] |
## Local predictions
You can also simulate the prediction service locally, replacing XXXXX with the number of your exported model:
```bash
gcloud ml-engine local predict --model-dir checkpoints/export/Servo/XXXXX --json-instances digits.json
```
---
### Misc.
You can read more about [batch norm here](../README_BATCHNORM.md).
If you want to experiment with TF Records, the standard TensorFlow
data format, you can run this script (available in the TensorFlow distribution)
to reformat the MNIST dataset into TF Records. It is not necessary for this sample though.
```bash
python <YOUR-TF-DIR>/tensorflow/examples/how_tos/reading_data/convert_to_records.py --directory=data --validation_size=0
```