---
title: LeNet MNIST Tutorial
description: Train and test "LeNet" on the MNIST handwritten digit data.
category: example
include_in_docs: true
priority: 1
---
# Training LeNet on MNIST with Caffe
We will assume that you have Caffe successfully compiled. If not, please refer to the [Installation page](/installation.html). In this tutorial, we will assume that your Caffe installation is located at `CAFFE_ROOT`.
## Prepare Datasets
You will first need to download the data from the MNIST website and convert it into the format Caffe expects. To do this, simply run the following commands:
```shell
cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh
```
If it complains that `wget` or `gunzip` is not installed, you need to install them first. After running the scripts there should be two datasets: `mnist_train_lmdb` and `mnist_test_lmdb`.
## LeNet: the MNIST Classification Model
Before we actually run the training program, let's explain what will happen. We will use the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) network, which is known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with Rectified Linear Unit (ReLU) activations for the neurons.
The design of LeNet contains the essence of CNNs that are still used in larger models such as the ones in ImageNet. In general, it consists of a convolutional layer followed by a pooling layer, another convolution layer followed by a pooling layer, and then two fully connected layers similar to the conventional multilayer perceptrons. We have defined the layers in `$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt`.
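As a sanity check on this architecture, we can trace the spatial size of a 28×28 MNIST image through the stack by hand. The helper functions below are not part of Caffe; they simply apply the standard output-size formulas for convolution and pooling (conv2's 50 channels come from the full prototxt):

```python
# Trace the spatial size of a 28x28 MNIST image through LeNet.
# conv: out = (in - kernel) // stride + 1; pooling uses the same formula.

def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

size = 28
size = conv_out(size, 5)   # conv1: 24x24, 20 channels
size = pool_out(size)      # pool1: 12x12
size = conv_out(size, 5)   # conv2: 8x8, 50 channels
size = pool_out(size)      # pool2: 4x4
print(size)                # 4, so ip1 sees 50 * 4 * 4 = 800 inputs
```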
## Define the MNIST Network
This section explains the `lenet_train_test.prototxt` model definition that specifies the LeNet model for MNIST handwritten digit classification. We assume that you are familiar with [Google Protobuf](https://developers.google.com/protocol-buffers/docs/overview), and assume that you have read the protobuf definitions used by Caffe, which can be found at `$CAFFE_ROOT/src/caffe/proto/caffe.proto`.
Specifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.caffe_pb2.NetParameter`) protobuf. We will start by giving the network a name:
```
name: "LeNet"
```
### Writing the Data Layer
We will read the MNIST data from the LMDB we created earlier in the tutorial. This is defined by a data layer:
```
layer {
  name: "mnist"
  type: "Data"
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "mnist_train_lmdb"
    backend: LMDB
    batch_size: 64
  }
  top: "data"
  top: "label"
}
```
Specifically, this layer has name `mnist` and type `Data`, and it reads the data from the given LMDB source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range [0,1). Why 0.00390625? It is 1/256. And finally, this layer produces two blobs: the `data` blob and the `label` blob.
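To see what this scaling does, here is a quick check in plain Python (independent of Caffe): raw 8-bit intensities in [0, 255] are mapped into [0, 1).

```python
scale = 0.00390625          # = 1 / 256, as in transform_param above
assert scale == 1 / 256

pixels = [0, 128, 255]      # hypothetical raw 8-bit pixel values
scaled = [p * scale for p in pixels]
print(scaled)               # [0.0, 0.5, 0.99609375]
```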
### Writing the Convolution Layer
Let's define the first convolution layer:
```
layer {
  name: "conv1"
  type: "Convolution"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "data"
  top: "conv1"
}
```
This layer takes the `data` blob (provided by the data layer) and produces the `conv1` blob. It outputs 20 channels, using a convolution kernel of size 5 applied with stride 1.
The fillers allow us to randomly initialize the weights and biases. For the weight filler, we will use the `xavier` algorithm, which automatically determines the scale of initialization based on the number of input and output neurons. For the bias filler, we will simply initialize it as constant, with the default filling value 0.
`lr_mult`s are the learning rate adjustments for the layer's learnable parameters. In this case, we will set the weight learning rate to be the same as the learning rate given by the solver during runtime, and the bias learning rate to be twice as large as that - this usually leads to better convergence rates.
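In plain terms, each parameter's effective learning rate is just the solver's base rate times its `lr_mult`. The sketch below assumes, for illustration, a base learning rate of 0.01 (the value used in `lenet_solver.prototxt`):

```python
# Effective per-parameter learning rates: base_lr * lr_mult.
base_lr = 0.01            # solver's base learning rate (assumed here)

weight_lr = base_lr * 1   # param { lr_mult: 1 } -> the weights
bias_lr = base_lr * 2     # param { lr_mult: 2 } -> the bias
print(weight_lr, bias_lr) # 0.01 0.02
```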
### Writing the Pooling Layer
Phew. Pooling layers are actually much easier to define:
```
layer {
  name: "pool1"
  type: "Pooling"
  pooling_param {
    kernel_size: 2
    stride: 2
    pool: MAX
  }
  bottom: "conv1"
  top: "pool1"
}
```
This says we will perform max pooling with a pool kernel size 2 and a stride of 2 (so no overlapping between neighboring pooling regions).
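For intuition, here is 2×2, stride-2 max pooling applied to a hypothetical 4×4 feature map, in plain Python rather than Caffe code:

```python
# Hypothetical 4x4 feature map; 2x2 max pooling with stride 2
# halves each spatial dimension, keeping the max of each 2x2 window.
fmap = [[1, 3, 2, 0],
        [4, 2, 1, 5],
        [0, 1, 3, 2],
        [2, 6, 0, 4]]

def max_pool_2x2(m):
    n = len(m) // 2
    return [[max(m[2*i][2*j], m[2*i][2*j+1],
                 m[2*i+1][2*j], m[2*i+1][2*j+1])
             for j in range(n)] for i in range(n)]

print(max_pool_2x2(fmap))   # [[4, 5], [6, 4]]
```

Note how each output value comes from a disjoint 2×2 window, so no input is pooled twice.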
Similarly, you can write up the second convolution and pooling layers. Check `$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt` for details.
### Writing the Fully Connected Layer
Writing a fully connected layer is also simple:
```
layer {
  name: "ip1"
  type: "InnerProduct"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "pool2"
  top: "ip1"
}
```
This defines a fully connected layer (known in Caffe as an `InnerProduct` layer) with 500 outputs. All other lines look familiar, right?
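Under the hood, an `InnerProduct` layer computes y = Wx + b. A tiny hypothetical example in plain Python, with 3 inputs and 2 outputs instead of the layer's real dimensions:

```python
# Fully connected layer: y = Wx + b (2 outputs from 3 inputs).
W = [[1, 0, 2],
     [0, 1, 1]]
b = [1, -1]
x = [2, 3, 4]

y = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + bias
     for row, bias in zip(W, b)]
print(y)   # [11, 6]
```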
### Writing the ReLU Layer
A ReLU Layer is also simple:
```
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
```
Since ReLU is an element-wise operation, we can do *in-place* operations to save some memory. This is achieved by simply giving the same name to the bottom and top blobs. Of course, do NOT use duplicated blob names for other layer types!
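The in-place trick amounts to overwriting the activation buffer rather than allocating a new one, as in this plain-Python sketch:

```python
# In-place ReLU: mutate the existing buffer, mirroring how Caffe
# reuses the "ip1" blob as both bottom and top.
acts = [-2.0, -0.5, 0.0, 1.5, 3.0]
for i in range(len(acts)):
    acts[i] = max(0.0, acts[i])
print(acts)   # [0.0, 0.0, 0.0, 1.5, 3.0]
```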
After the ReLU layer, we will write another `InnerProduct` layer:
```
layer {
  name: "ip2"
  type: "InnerProduct"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
  bottom: "ip1"
  top: "ip2"
}
```
### Writing the Loss Layer
Finally, we will write the loss!
```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
}
```
The `SoftmaxWithLoss` layer implements both the softmax and the multinomial logistic loss (which saves time and improves numerical stability). It takes two blobs, the first being the prediction and the second being the `label` provided by the data layer (remember it?). It does not produce any outputs; all it does is compute the loss function value, report it when backpropagation starts, and initiate the gradient with respect to `ip2`. This is where all the magic starts.
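A minimal sketch of the fused computation in plain Python, using hypothetical scores: subtracting the maximum score before exponentiating is the numerical-stability trick the fused layer gives us for free.

```python
import math

# Fused softmax + multinomial logistic loss for a single example.
def softmax_loss(scores, label):
    m = max(scores)                            # stability: shift by the max
    exps = [math.exp(s - m) for s in scores]
    prob = exps[label] / sum(exps)             # softmax probability of the label
    return -math.log(prob)                     # negative log-likelihood

scores = [1.0, 2.0, 5.0]   # hypothetical ip2 outputs for one image
loss = softmax_loss(scores, label=2)
print(loss)                # small, since class 2 already dominates
```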
### Additional Notes: Writing Layer Rules
Layer definitions can include rules for whether and when they are included in the network definition, like the one below:
```
layer {
  // ...layer definition...
  include: { phase: TRAIN }
}
```
This is a rule, which controls layer inclusion in the network based on the network's current state.
You can refer to `$CAFFE_ROOT/src/caffe/proto/caffe.proto` for more information about layer rules and the model schema.
In the above example, this layer will be included only in the `TRAIN` phase.
If we change `TRAIN` to `TEST`, then this layer will be used only in the test phase.
By default, that is, without layer rules, a layer is always included in the network.
Thus, `lenet_train_test.prototxt` has two `Data` layers defined (with different `batch_size`), one for the training phase and one for the testing phase.
Also, there is an `Accuracy` layer, included only in the `TEST` phase, for reporting the model accuracy every 100 iterations, as defined in `lenet_solver.prototxt`.
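For reference, such a test-only `Accuracy` layer can be written as follows (see `lenet_train_test.prototxt` for the exact definition used in this example):

```
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
```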