---
title: LeNet MNIST Tutorial
description: Train and test "LeNet" on the MNIST handwritten digit data.
category: example
include_in_docs: true
priority: 1
---
# Training LeNet on MNIST with Caffe
We will assume that you have Caffe successfully compiled. If not, please refer to the [Installation page](/installation.html). In this tutorial, we will assume that your Caffe installation is located at `CAFFE_ROOT`.
## Prepare Datasets
You will first need to download and convert the data format from the MNIST website. To do this, simply run the following commands:
```
cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh
```
If it complains that `wget` or `gunzip` are not installed, you need to install them. After running the scripts there should be two datasets, `mnist_train_lmdb` and `mnist_test_lmdb`.
## LeNet: the MNIST Classification Model
Before we actually run the training program, let's explain what will happen. We will use the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) network, which is known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with Rectified Linear Unit (ReLU) activations for the neurons.
The design of LeNet contains the essence of CNNs that are still used in larger models such as the ones in ImageNet. In general, it consists of a convolutional layer followed by a pooling layer, another convolution layer followed by a pooling layer, and then two fully connected layers similar to the conventional multilayer perceptrons. We have defined the layers in `$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt`.
## Define the MNIST Network
This section explains the `lenet_train_test.prototxt` model definition that specifies the LeNet model for MNIST handwritten digit classification. We assume that you are familiar with [Google Protobuf](https://developers.google.com/protocol-buffers/docs/overview), and assume that you have read the protobuf definitions used by Caffe, which can be found at `$CAFFE_ROOT/src/caffe/proto/caffe.proto`.
Specifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.caffe_pb2.NetParameter`) protobuf. We will start by giving the network a name:
```
name: "LeNet"
```
### Writing the Data Layer
Currently, we will read the MNIST data from the lmdb we created earlier in the demo. This is defined by a data layer:
```
layer {
  name: "mnist"
  type: "Data"
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "mnist_train_lmdb"
    backend: LMDB
    batch_size: 64
  }
  top: "data"
  top: "label"
}
```
Specifically, this layer has name `mnist` and type `Data`, and it reads the data from the given lmdb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. (Note that `scale` is a transformation parameter, so it goes in `transform_param` rather than `data_param`.) Finally, this layer produces two blobs: the `data` blob and the `label` blob.
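As a quick sanity check on that scale factor, here is a short NumPy sketch (illustrative only, not part of the tutorial's code) showing that multiplying raw uint8 pixels by 0.00390625 lands them in \[0,1):

```python
import numpy as np

# 0.00390625 is exactly 1/256, so uint8 pixels 0..255 map into [0, 1).
pixels = np.array([0, 128, 255], dtype=np.uint8)
scaled = pixels.astype(np.float32) * 0.00390625

print(scaled)  # 0.0, 0.5, 0.99609375 -- all strictly below 1.0
```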
### Writing the Convolution Layer
Let's define the first convolution layer:
```
layer {
  name: "conv1"
  type: "Convolution"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "data"
  top: "conv1"
}
```
This layer takes the `data` blob (provided by the data layer) and produces the `conv1` blob. It produces outputs of 20 channels, using a convolution kernel of size 5 applied with stride 1.
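To see where the spatial dimensions of `conv1` come from, the standard output-size formula can be computed directly. The helper below is an illustrative sketch (our own function, not Caffe code):

```python
def conv_output_size(in_size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution, with floor division as Caffe uses."""
    return (in_size + 2 * pad - kernel) // stride + 1

# conv1: 28x28 MNIST input, kernel 5, stride 1, no padding
print(conv_output_size(28, 5, 1))  # 24, so conv1 is a 20 x 24 x 24 blob
```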
The fillers allow us to randomly initialize the value of the weights and bias. For the weight filler, we will use the `xavier` algorithm that automatically determines the scale of initialization based on the number of input and output neurons. For the bias filler, we will simply initialize it as constant, with the default filling value 0.
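To make the `xavier` filler concrete, here is a hedged NumPy sketch of the scheme Caffe's filler uses by default: uniform samples in \[-a, a\] with a = sqrt(3 / fan_in). The function name and RNG seeding are ours:

```python
import numpy as np

def xavier_fill(shape, rng=np.random.default_rng(0)):
    # fan_in: number of inputs feeding each output neuron
    fan_in = int(np.prod(shape[1:]))
    a = np.sqrt(3.0 / fan_in)
    return rng.uniform(-a, a, size=shape)

w = xavier_fill((20, 1, 5, 5))  # conv1 weights: 20 filters of 1 x 5 x 5
b = np.zeros(20)                # bias filler: constant, default value 0
```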
The `lr_mult`s are the learning rate multipliers for the layer's learnable parameters. In this case, we set the weight learning rate to be the same as the learning rate given by the solver during runtime, and the bias learning rate to be twice as large; this usually leads to better convergence rates.
### Writing the Pooling Layer
Phew. Pooling layers are actually much easier to define:
```
layer {
  name: "pool1"
  type: "Pooling"
  pooling_param {
    kernel_size: 2
    stride: 2
    pool: MAX
  }
  bottom: "conv1"
  top: "pool1"
}
```
This says we will perform max pooling with a kernel size of 2 and a stride of 2, so there is no overlap between neighboring pooling regions.
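Non-overlapping 2x2 pooling simply halves each spatial dimension, taking the maximum in each 2x2 window. A minimal NumPy sketch (illustrative, not Caffe's implementation):

```python
import numpy as np

def max_pool_2x2(x):
    """Max pooling with kernel 2, stride 2 over a 2-D array."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16).reshape(4, 4)
print(max_pool_2x2(x))  # [[ 5  7] [13 15]]
```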
Similarly, you can write up the second convolution and pooling layers. Check `$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt` for details.
### Writing the Fully Connected Layer
Writing a fully connected layer is also simple:
```
layer {
  name: "ip1"
  type: "InnerProduct"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "pool2"
  top: "ip1"
}
```
This defines a fully connected layer (known in Caffe as an `InnerProduct` layer) with 500 outputs. All other lines look familiar, right?
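Under the hood, an `InnerProduct` layer is just a matrix multiply plus a bias: y = Wx + b. A self-contained sketch, assuming the second convolution/pooling pair flattens to 800 inputs (50 x 4 x 4, as in the reference prototxt):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50 * 4 * 4)      # flattened pool2 blob (800 values)
W = rng.standard_normal((500, x.size))   # 500 outputs, one row per output neuron
b = np.zeros(500)

y = W @ x + b
print(y.shape)  # (500,)
```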
### Writing the ReLU Layer
A ReLU Layer is also simple:
```
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
```
Since ReLU is an element-wise operation, we can do *in-place* operations to save some memory. This is achieved by simply giving the same name to the bottom and top blobs. Of course, do NOT use duplicated blob names for other layer types!
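The in-place idea can be seen in a one-line NumPy sketch: the output is written over the same memory that held the input, just as when the bottom and top blob names match:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 3.0])
np.maximum(x, 0.0, out=x)  # overwrite x in place; no new array is allocated
print(x)  # [0. 0. 0. 3.]
```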
After the ReLU layer, we will write another `InnerProduct` layer:
```
layer {
  name: "ip2"
  type: "InnerProduct"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "ip1"
  top: "ip2"
}
```
### Writing the Loss Layer
Finally, we will write the loss!
```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
}
```
The `SoftmaxWithLoss` layer implements both the softmax and the multinomial logistic loss (which saves time and improves numerical stability). It takes two blobs, the first being the prediction and the second being the `label` provided by the data layer (remember it?). It does not produce any outputs; all it does is compute the loss function value, report it when backpropagation starts, and initiate the gradient with respect to `ip2`. This is where all the magic starts.
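A sketch of what this layer computes for a single example: a numerically stable softmax over the 10 `ip2` scores, the negative log-likelihood of the true label, and the well-known gradient `softmax - one_hot`. The scores and label below are made up for illustration:

```python
import numpy as np

scores = np.array([1.0, 2.0, 0.5, -1.0, 3.0, 0.0, 0.2, -0.3, 1.5, 0.7])
label = 4  # hypothetical ground-truth digit

shifted = scores - scores.max()              # subtract max for numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum()
loss = -np.log(probs[label])                 # multinomial logistic loss

grad = probs.copy()                          # gradient w.r.t. the scores,
grad[label] -= 1.0                           # i.e. what flows back into ip2
```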
### Additional Notes: Writing Layer Rules
Layer definitions can include rules for whether and when they are included in the network definition, like the one below:
```
layer {
  // ...layer definition...
  include: { phase: TRAIN }
}
```
This rule controls layer inclusion in the network based on the network's current state. You can refer to `$CAFFE_ROOT/src/caffe/proto/caffe.proto` for more information about layer rules and model schema. In the above example, the layer will be included only in the `TRAIN` phase; if we replace `TRAIN` with `TEST`, it will be used only in the test phase. By default, that is, without layer rules, a layer is always included in the network.
Thus, `lenet_train_test.prototxt` has two `Data` layers defined (with different `batch_size`), one for the training phase and one for the testing phase. There is also an `Accuracy` layer, included only in the `TEST` phase, which reports the model accuracy every 100 iterations, as defined in `lenet_solver.prototxt`.
## Define the MNIST Solver
Check out the comments explaining each line in `$CAFFE_ROOT/examples/mnist/lenet_solver.prototxt`.