DeepLearnToolbox
================
A Matlab toolbox for Deep Learning.
Deep Learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data.
It is inspired by the human brain's apparent deep (layered, hierarchical) architecture.
A good overview of the theory of Deep Learning is
[Learning Deep Architectures for AI](http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf)
For a more informal introduction, see the following videos by Geoffrey Hinton and Andrew Ng.
* [The Next Generation of Neural Networks](http://www.youtube.com/watch?v=AyzOUbkUf3M) (Hinton, 2007)
* [Recent Developments in Deep Learning](http://www.youtube.com/watch?v=VdIURAu1-aU) (Hinton, 2010)
* [Unsupervised Feature Learning and Deep Learning](http://www.youtube.com/watch?v=ZmNOAtZIgIk) (Ng, 2011)
If you use this toolbox in your research, please cite [Prediction as a candidate for learning deep hierarchical models of data](http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=6284):
```
@MASTERSTHESIS{IMM2012-06284,
author = "R. B. Palm",
title = "Prediction as a candidate for learning deep hierarchical models of data",
year = "2012",
}
```
Contact: rasmusbergpalm at gmail dot com
Directories included in the toolbox
-----------------------------------
`NN/` - A library for Feedforward Backpropagation Neural Networks
`CNN/` - A library for Convolutional Neural Networks
`DBN/` - A library for Deep Belief Networks
`SAE/` - A library for Stacked Auto-Encoders
`CAE/` - A library for Convolutional Auto-Encoders
`util/` - Utility functions used by the libraries
`data/` - Data used by the examples
`tests/` - Unit tests to verify the toolbox is working
For references on each library, see `REFS.md`.
Setup
-----
1. Download.
2. Run `addpath(genpath('DeepLearnToolbox'));` in Matlab (see the sketch below).
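After the path is set up, the bundled example scripts (the `test_example_*` functions shown below, which live in `tests/`) can be run directly. A minimal session, assuming the toolbox was downloaded into the current directory as `DeepLearnToolbox`:

```matlab
% Add the toolbox and all of its subdirectories to the Matlab path,
% then run one of the bundled examples from tests/:
addpath(genpath('DeepLearnToolbox'));
test_example_NN
```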
Everything is a work in progress
------------------------------
Example: Deep Belief Network
---------------------
```matlab
function test_example_DBN
load mnist_uint8;
train_x = double(train_x) / 255; % rescale pixel values to [0,1]
test_x = double(test_x) / 255;
train_y = double(train_y);
test_y = double(test_y);
%% ex1 train a 100 hidden unit RBM and visualize its weights
rng(0);
dbn.sizes = [100];
opts.numepochs = 1;   % number of full sweeps through the data
opts.batchsize = 100; % take a mean gradient step over this many samples
opts.momentum = 0;    % momentum for the weight updates
opts.alpha = 1;       % learning rate
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
figure; visualize(dbn.rbm{1}.W'); % Visualize the RBM weights
%% ex2 train a 100-100 hidden unit DBN and use its weights to initialize a NN
rng(0);
%train dbn
dbn.sizes = [100 100];
opts.numepochs = 1;
opts.batchsize = 100;
opts.momentum = 0;
opts.alpha = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
% unfold the DBN into a feedforward NN
nn = dbnunfoldtonn(dbn, 10); % 10 output units, one per MNIST class
nn.activation_function = 'sigm';
%train nn
opts.numepochs = 1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.10, 'Too big error');
```
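After fine-tuning, the network can be used for prediction with the same `nnpredict` helper used in the NN example below. A minimal sketch, assuming the one-hot label columns of `mnist_uint8` are ordered 0-9:

```matlab
% nnpredict returns the 1-based index of the most active output unit,
% so subtract 1 to map back to the MNIST digit labels 0-9:
labels = nnpredict(nn, test_x) - 1;
```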
Example: Stacked Auto-Encoders
---------------------
```matlab
function test_example_SAE
load mnist_uint8;
train_x = double(train_x)/255;
test_x = double(test_x)/255;
train_y = double(train_y);
test_y = double(test_y);
%% ex1 train a 100 hidden unit SDAE and use it to initialize a FFNN
% Setup and train a stacked denoising autoencoder (SDAE)
rng(0);
sae = saesetup([784 100]);
sae.ae{1}.activation_function = 'sigm';
sae.ae{1}.learningRate = 1;
sae.ae{1}.inputZeroMaskedFraction = 0.5; % corrupt 50% of the inputs (denoising)
opts.numepochs = 1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:,2:end)') % first-layer weights, bias column dropped
% Use the SDAE to initialize a FFNN
nn = nnsetup([784 100 10]);
nn.activation_function = 'sigm';
nn.learningRate = 1;
nn.W{1} = sae.ae{1}.W{1};
% Train the FFNN
opts.numepochs = 1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.16, 'Too big error');
%% ex2 train a 100-100 hidden unit SDAE and use it to initialize a FFNN
% Setup and train a stacked denoising autoencoder (SDAE)
rng(0);
sae = saesetup([784 100 100]);
sae.ae{1}.activation_function = 'sigm';
sae.ae{1}.learningRate = 1;
sae.ae{1}.inputZeroMaskedFraction = 0.5;
sae.ae{2}.activation_function = 'sigm';
sae.ae{2}.learningRate = 1;
sae.ae{2}.inputZeroMaskedFraction = 0.5;
opts.numepochs = 1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:,2:end)')
% Use the SDAE to initialize a FFNN
nn = nnsetup([784 100 100 10]);
nn.activation_function = 'sigm';
nn.learningRate = 1;
%add pretrained weights
nn.W{1} = sae.ae{1}.W{1};
nn.W{2} = sae.ae{2}.W{1};
% Train the FFNN
opts.numepochs = 1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
```
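The `inputZeroMaskedFraction` option is what makes these autoencoders denoising: during training, a random fraction of each input is set to zero, and the autoencoder is trained to reconstruct the uncorrupted input. A minimal sketch of the corruption step (illustrative only; the toolbox's internal implementation may differ in detail):

```matlab
% Corrupt a batch x by zeroing a random 50% of its entries,
% matching inputZeroMaskedFraction = 0.5 above:
mask = rand(size(x)) > 0.5; % keep each entry with probability 0.5
x_corrupted = x .* mask;    % fed to the autoencoder
% the reconstruction target remains the clean x
```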
Example: Convolutional Neural Nets
---------------------
```matlab
function test_example_CNN
load mnist_uint8;
train_x = double(reshape(train_x',28,28,60000))/255;
test_x = double(reshape(test_x',28,28,10000))/255;
train_y = double(train_y');
test_y = double(test_y');
%% ex1 Train a 6c-2s-12c-2s Convolutional neural network
% will run 1 epoch in about 200 seconds and get around 11% error.
% with 100 epochs you'll get around 1.2% error
rng(0);
cnn.layers = {
struct('type', 'i') %input layer
struct('type', 'c', 'outputmaps', 6, 'kernelsize', 5) %convolution layer
struct('type', 's', 'scale', 2) %sub sampling layer
struct('type', 'c', 'outputmaps', 12, 'kernelsize', 5) %convolution layer
struct('type', 's', 'scale', 2) %subsampling layer
};
cnn = cnnsetup(cnn, train_x, train_y);
opts.alpha = 1;
opts.batchsize = 50;
opts.numepochs = 1;
cnn = cnntrain(cnn, train_x, train_y, opts);
[er, bad] = cnntest(cnn, test_x, test_y);
%plot mean squared error
figure; plot(cnn.rL);
assert(er<0.12, 'Too big error');
```
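As a quick sanity check (not part of the toolbox), the feature-map sizes implied by the 6c-2s-12c-2s architecture on 28x28 MNIST images can be traced by hand; a valid convolution shrinks each side by `kernelsize - 1` and subsampling divides it by `scale`:

```matlab
sz = 28;         % MNIST input is 28x28
sz = sz - 5 + 1; % conv, kernelsize 5 -> 24
sz = sz / 2;     % subsample, scale 2 -> 12
sz = sz - 5 + 1; % conv, kernelsize 5 -> 8
sz = sz / 2;     % subsample, scale 2 -> 4
fprintf('12 maps of %dx%d = %d features into the output layer\n', sz, sz, 12*sz*sz);
```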
Example: Neural Networks
---------------------
```matlab
function test_example_NN
load mnist_uint8;
train_x = double(train_x) / 255;
test_x = double(test_x) / 255;
train_y = double(train_y);
test_y = double(test_y);
% normalize using the training set's mean and standard deviation
[train_x, mu, sigma] = zscore(train_x);
test_x = normalize(test_x, mu, sigma);
%% ex1 vanilla neural net
rng(0);
nn = nnsetup([784 100 10]);
opts.numepochs = 1; % Number of full sweeps through data
opts.batchsize = 100; % Take a mean gradient step over this many samples
[nn, L] = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.08, 'Too big error');
% Construct an artificial image of a '1' and verify that we can predict it
x = zeros(1,28,28);
x(:, 14:15, 6:22) = 1; % a vertical bar
x = reshape(x,1,28^2);
figure; visualize(x');
predicted = nnpredict(nn,x)-1;
assert(predicted == 1);
%% ex2 neural net with L2 weight decay
rng(0);
nn = nnsetup([784 100 10]);
nn.weightPenaltyL2 = 1e-4; % L2 weight decay
opts.numepochs = 1; % Number of full sweeps through data
opts.batchsize = 100; % Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex3 neural net with dropout
rng(0);
nn = nnsetup([784 100 10]);
nn.dropoutFraction = 0.5; % Dropout fraction
opts.numepochs = 1; % Number of full sweeps through data
opts.batchsize = 100; % Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex4 neural net with sigmoid activation function
rng(0);
nn = nnsetup([784 100 10]);
nn.activation_function = 'sigm'; % Sigmoid activation function
nn.learningRate = 1; % Sigmoid requires a lower learning rate
opts.numepochs = 1; % Number of full sweeps through data
opts.batchsize = 100; % Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
```
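Note that the test set is normalized with the `mu` and `sigma` estimated on the training set; reusing the training statistics keeps the two sets on the same scale without touching test-set information. A sketch of what this amounts to (illustrative; `zscore` is Matlab's builtin, `normalize` is the toolbox util, and the zero-variance guard mirrors `zscore`'s behavior):

```matlab
mu = mean(train_x, 1);      % per-pixel mean over the training set
sigma = std(train_x, 0, 1); % per-pixel standard deviation
sigma(sigma == 0) = 1;      % guard against constant (e.g. border) pixels
test_x_byhand = bsxfun(@rdivide, bsxfun(@minus, test_x, mu), sigma);
```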