# Building Your Own Deep Learning Framework in Java
### Preface
I started studying deep learning in my spare time in 2016. Since Java is the language I know best from my day job, it was the natural choice of implementation language. My motivation for building a framework from scratch was to gain a deeper understanding of the algorithms, the models, and the principles and ideas behind their implementations.
## Framework Introduction
Omega-AI is a deep learning framework written in Java that helps you quickly build neural networks and train or test models. It supports multi-threaded computation, and currently supports building BP (fully connected) neural networks and convolutional neural networks.
### Source code:
[https://gitee.com/iangellove/omega-ai](https://gitee.com/iangellove/omega-ai)
[https://github.com/iangellove/Omega-AI](https://github.com/iangellove/Omega-AI)
### Dependencies
Since omega-engine-1.0.3 adds JCuda support, version 1.0.3 requires a CUDA installation matching your JCuda version. This project uses the jcuda-11.2.0 package, so CUDA 11.2.x must be installed.
### JVM settings
The parameters of a model like VGG16 are quite large, so the JVM heap needs to be enlarged when deploying the project.
Example: `-Xmx20480m -Xms20480m -Xmn10240m`
### Demo
[MNIST handwritten digit recognition with a convolutional neural network](http://120.237.148.121:8011/mnist)
![MNIST demo screenshot](https://img-blog.csdnimg.cn/b9b5846af6624bdf8f5d570c5052bc64.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3UwMTMyODMzMDQ=,size_1,color_FFFFFF,t_70#pic_center)
## Features
#### Supported layer types:
FullyLayer (fully connected layer)
ConvolutionLayer (convolution layer)
PoolingLayer (pooling layer)
#### Activation layers
SoftmaxLayer (softmax activation)
ReluLayer
LeakyReluLayer
TanhLayer
SigmodLayer
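The element-wise formulas behind these activation layers can be sketched in plain Java. This is a standalone illustration, not the framework's layer code; the 0.01 leaky slope is an assumed default.

```java
// Standalone sketch of the activation formulas used by the layers above.
// The framework's layer classes apply these element-wise over whole tensors.
public class Activations {

    // ReluLayer: max(0, x)
    public static double relu(double x) {
        return Math.max(0, x);
    }

    // LeakyReluLayer: small negative slope (0.01 is an assumed default here)
    public static double leakyRelu(double x) {
        return x > 0 ? x : 0.01 * x;
    }

    // SigmodLayer (sigmoid): 1 / (1 + e^-x)
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // TanhLayer: hyperbolic tangent
    public static double tanh(double x) {
        return Math.tanh(x);
    }

    // SoftmaxLayer: exponentiate and normalize (max-shifted for numerical stability)
    public static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0;
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = Math.exp(x[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < x.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(relu(-2.0));   // prints 0.0
        System.out.println(sigmoid(0.0)); // prints 0.5
        double[] p = softmax(new double[]{1, 2, 3});
        System.out.println(p[0] + p[1] + p[2]); // sums to ~1.0
    }
}
```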
#### Normalization layers
BNLayer (Batch Normalization)
DropoutLayer
#### Updaters
Momentum
Adam
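The weight-update rules behind these two updaters can be sketched as scalar Java methods. This is a standalone illustration; the Adam constants (0.9, 0.999, 1e-8) are common defaults, not necessarily the framework's.

```java
// Scalar sketch of the Momentum and Adam update rules.
// State (velocity, moment estimates) is passed as one-element arrays
// so the methods can mutate it across calls.
public class Updaters {

    // Momentum: v = mu * v - lr * grad; w += v
    public static double momentumStep(double w, double grad, double[] v,
                                      double lr, double mu) {
        v[0] = mu * v[0] - lr * grad;
        return w + v[0];
    }

    // Adam: bias-corrected first/second moment estimates (t is the 1-based step count)
    public static double adamStep(double w, double grad, double[] m, double[] s,
                                  double lr, int t) {
        double beta1 = 0.9, beta2 = 0.999, eps = 1e-8; // assumed defaults
        m[0] = beta1 * m[0] + (1 - beta1) * grad;
        s[0] = beta2 * s[0] + (1 - beta2) * grad * grad;
        double mHat = m[0] / (1 - Math.pow(beta1, t));
        double sHat = s[0] / (1 - Math.pow(beta2, t));
        return w - lr * mHat / (Math.sqrt(sHat) + eps);
    }

    public static void main(String[] args) {
        double[] v = {0};
        // First momentum step equals plain SGD because the velocity starts at 0.
        System.out.println(momentumStep(1.0, 0.5, v, 0.1, 0.9));
        double[] m = {0}, s = {0};
        // First Adam step moves w by roughly lr in the negative gradient direction.
        System.out.println(adamStep(1.0, 0.5, m, s, 0.001, 1));
    }
}
```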
#### Trainers
BGDOptimizer (batch gradient descent)
MBSGDOptimizer (mini-batch stochastic gradient descent)
SGDOptimizer (stochastic gradient descent)
#### Loss functions
SquareLoss (squared-error loss)
CrossEntropyLoss (cross-entropy loss)
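Per-sample versions of these two losses can be sketched as follows. This is a standalone illustration; the 1/2 factor in the squared-error loss and the epsilon guarding `log(0)` are assumed conventions, not taken from the framework.

```java
// Per-sample sketch of the two loss functions listed above.
public class Losses {

    // SquareLoss: 1/2 * sum((pred - label)^2); the 1/2 factor is a common
    // convention assumed here to simplify the gradient.
    public static double squareLoss(double[] pred, double[] label) {
        double sum = 0;
        for (int i = 0; i < pred.length; i++) {
            double d = pred[i] - label[i];
            sum += d * d;
        }
        return 0.5 * sum;
    }

    // CrossEntropyLoss over a softmax output and a one-hot label vector.
    public static double crossEntropy(double[] prob, double[] oneHot) {
        double loss = 0;
        for (int i = 0; i < prob.length; i++) {
            if (oneHot[i] > 0) {
                loss -= oneHot[i] * Math.log(prob[i] + 1e-12); // epsilon guards log(0)
            }
        }
        return loss;
    }

    public static void main(String[] args) {
        // A perfect prediction gives zero squared-error loss...
        System.out.println(squareLoss(new double[]{1, 0}, new double[]{1, 0})); // prints 0.0
        // ...and near-zero cross-entropy.
        System.out.println(crossEntropy(new double[]{1.0, 0.0}, new double[]{1, 0}));
    }
}
```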
#### Learning-rate schedulers (LearnRateUpdate)
NONE (fixed learning rate)
LR_DECAY (decay)
GD_GECAY (gd_decay)
CONSTANT (constant learning rate)
RANDOM: `Math.pow(RandomUtils.getInstance().nextFloat(), power) * this.lr`
POLY: `this.lr * Math.pow((1.0f - (batchIndex * 1.0f / trainTime / dataSize * batchSize)), power)`
STEP: `this.lr * Math.pow(this.scale, batchIndex / step)`
EXP: `this.lr * Math.pow(this.gama, batchIndex)`
SIG: `this.lr / (1 + Math.pow(Math.E, this.gama * (batchIndex - step)))`
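The bracketed expressions above can be collected into one runnable sketch; fields such as `this.lr`, `this.scale`, and `this.gama` become plain parameters here.

```java
// The learning-rate schedule formulas from the list above, as pure functions.
public class LearnRateSchedules {

    // STEP: lr * scale^(batchIndex / step); integer division, as in the formula,
    // so the rate changes once per `step` batches.
    public static double step(double lr, double scale, int step, int batchIndex) {
        return lr * Math.pow(scale, batchIndex / step);
    }

    // EXP: lr * gama^batchIndex
    public static double exp(double lr, double gama, int batchIndex) {
        return lr * Math.pow(gama, batchIndex);
    }

    // SIG: lr / (1 + e^(gama * (batchIndex - step)))
    public static double sig(double lr, double gama, int step, int batchIndex) {
        return lr / (1 + Math.pow(Math.E, gama * (batchIndex - step)));
    }

    // POLY: lr * (1 - progress)^power, where progress runs over all batches
    public static double poly(double lr, double power, int batchIndex,
                              int trainTime, int dataSize, int batchSize) {
        return lr * Math.pow(1.0 - (batchIndex * 1.0 / trainTime / dataSize * batchSize), power);
    }

    public static void main(String[] args) {
        // STEP with scale 0.5 halves the rate every 10 batches (demo values).
        System.out.println(step(0.1, 0.5, 10, 0));  // prints 0.1
        System.out.println(step(0.1, 0.5, 10, 10)); // prints 0.05
    }
}
```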
#### Data loaders
.bin (binary data files)
.idx3-ubyte (MNIST image format)
.txt (plain text)
## Usage
### Built-in datasets
iris (iris flower dataset)
mnist (MNIST handwritten digits)
cifar_10 (CIFAR-10)
### Benchmark results
| Dataset | Epochs | Model | Test accuracy |
|---|---|---|---|
| iris | 5 | BP network (3 fully connected layers) | 100% |
| mnist | 10 | AlexNet | 98.6% |
| cifar_10 | 50 | AlexNet | 76.6% |
| cifar_10 | 50 | VGG16 | 86.45% |
| cifar_10 | 300 | ResNet-18 (batchSize 128, initial learning rate 0.1, learnRateUpdate GD_GECAY, optimizer adamw; preprocessing: randomCrop, randomHorizontalFlip, cutout, normalize) | 91.23% |
## Example code
#### BP network on iris
```java
public void bpNetwork_iris() {
	/**
	 * Load the training and test datasets.
	 */
	String iris_train = "/dataset/iris/iris.txt";
	String iris_test = "/dataset/iris/iris_test.txt";
	String[] labelSet = new String[] {"1", "-1"};
	DataSet trainData = DataLoader.loalDataByTxt(iris_train, ",", 1, 1, 4, 2, labelSet);
	DataSet testData = DataLoader.loalDataByTxt(iris_test, ",", 1, 1, 4, 2, labelSet);
	System.out.println("train_data:" + JsonUtils.toJson(trainData));
	// Build a BP network: 4 -> 40 -> 20 -> 2, ReLU activations, softmax output.
	BPNetwork netWork = new BPNetwork(new SoftmaxWithCrossEntropyLoss());
	InputLayer inputLayer = new InputLayer(1, 1, 4);
	FullyLayer hidden1 = new FullyLayer(4, 40);
	ReluLayer active1 = new ReluLayer();
	FullyLayer hidden2 = new FullyLayer(40, 20);
	ReluLayer active2 = new ReluLayer();
	FullyLayer hidden3 = new FullyLayer(20, 2);
	SoftmaxWithCrossEntropyLayer hidden4 = new SoftmaxWithCrossEntropyLayer(2);
	netWork.addLayer(inputLayer);
	netWork.addLayer(hidden1);
	netWork.addLayer(active1);
	netWork.addLayer(hidden2);
	netWork.addLayer(active2);
	netWork.addLayer(hidden3);
	netWork.addLayer(hidden4);
	try {
		// Train with mini-batch SGD and a fixed learning rate.
		MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 8, 0.00001d, 10, LearnRateUpdate.NONE);
		optimizer.train(trainData);
		optimizer.test(testData);
	} catch (Exception e) {
		e.printStackTrace();
	}
}
```
#### CNN on MNIST
```java
public void cnnNetwork_mnist() {
	try {
		/**
		 * Load the MNIST training and test sets.
		 */
		String mnist_train_data = "/dataset/mnist/train-images.idx3-ubyte";
		String mnist_train_label = "/dataset/mnist/train-labels.idx1-ubyte";
		String mnist_test_data = "/dataset/mnist/t10k-images.idx3-ubyte";
		String mnist_test_label = "/dataset/mnist/t10k-labels.idx1-ubyte";
		String[] labelSet = new String[] {"0","1","2","3","4","5","6","7","8","9"};
		Resource trainDataRes = new ClassPathResource(mnist_train_data);
		Resource trainLabelRes = new ClassPathResource(mnist_train_label);
		Resource testDataRes = new ClassPathResource(mnist_test_data);
		Resource testLabelRes = new ClassPathResource(mnist_test_label);
		DataSet trainData = DataLoader.loadDataByUByte(trainDataRes.getFile(), trainLabelRes.getFile(), labelSet, 1, 1, 784, true);
		DataSet testData = DataLoader.loadDataByUByte(testDataRes.getFile(), testLabelRes.getFile(), labelSet, 1, 1, 784, true);
		int channel = 1;
		int height = 28;
		int width = 28;
		// Build the CNN: two conv/BN/activation/pooling stages, then two fully connected layers.
		CNN netWork = new CNN(new SoftmaxWithCrossEntropyLoss(), UpdaterType.momentum);
		netWork.learnRate = 0.001d;
		InputLayer inputLayer = new InputLayer(channel, 1, 784);
		ConvolutionLayer conv1 = new ConvolutionLayer(channel, 6, width, height, 5, 5, 2, 1, false);
		BNLayer bn1 = new BNLayer();
		LeakyReluLayer active1 = new LeakyReluLayer();
		PoolingLayer pool1 = new PoolingLayer(conv1.oChannel, conv1.oWidth, conv1.oHeight, 2, 2, 2, PoolingType.MAX_POOLING);
		ConvolutionLayer conv2 = new ConvolutionLayer(pool1.oChannel, 12, pool1.oWidth, pool1.oHeight, 5, 5, 0, 1, false);
		BNLayer bn2 = new BNLayer();
		LeakyReluLayer active2 = new LeakyReluLayer();
		DropoutLayer drop1 = new DropoutLayer(0.5d);
		PoolingLayer pool2 = new PoolingLayer(conv2.oChannel, conv2.oWidth, conv2.oHeight, 2, 2, 2, PoolingType.MAX_POOLING);
		int fInputCount = pool2.oChannel * pool2.oWidth * pool2.oHeight;
		int inputCount = (int) (Math.sqrt((fInputCount) + 10) + 10);
		FullyLayer full1 = new FullyLayer(fInputCount, inputCount, false);
		BNLayer bn3 = new BNLayer();
		LeakyReluLayer active3 = new LeakyReluLayer();
		FullyLayer full2 = new FullyLayer(inputCount, 10);
		SoftmaxWithCrossEntropyLayer softmax = new SoftmaxWithCrossEntropyLayer(10);
		netWork.addLayer(inputLayer);
		netWork.addLayer(conv1);
		netWork.addLayer(bn1);
		netWork.addLayer(active1);
		netWork.addLayer(pool1);
		netWork.addLayer(conv2);
		netWork.addLayer(bn2);
		netWork.addLayer(active2);
		netWork.addLayer(drop1);
		netWork.addLayer(pool2);
		netWork.addLayer(full1);
		netWork.addLayer(bn3);
		netWork.addLayer(active3);
		netWork.addLayer(full2);
		netWork.addLayer(softmax);
		MBSGDOptimizer optimizer = new MBSGDOptimizer(netWork, 10, 0.0001d, 96, LearnRateUpdate.NONE);
		long start = System.currentTimeMillis();
		optimizer.train(trainData);
		optimizer.test(testData);
		System.out.println(((System.currentTimeMillis() - start) / 1000) + "s.");
	} catch (Exception e) {
		e.printStackTrace();
	}
}
```