# AD-Prediction
Convolutional Neural Networks for Alzheimer's Disease Prediction Using Brain MRI Images
## Abstract
Alzheimer's disease (AD) is characterized by severe memory loss and cognitive impairment. It is associated with significant changes in brain structure, which can be measured by magnetic resonance imaging (MRI) scans. These observable preclinical structural changes provide an opportunity for early AD detection using image classification tools, such as convolutional neural networks (CNNs). However, most AD-related studies to date have been limited by small sample sizes, so finding an efficient way to train an image classifier on limited data is critical. In our project, we explored different CNN-based transfer-learning methods for AD prediction from structural brain MRI images. We found that both a pretrained 2D AlexNet with a 2D-representation method and a simple neural network with a pretrained 3D autoencoder improved prediction performance compared to a deep CNN trained from scratch. The pretrained 2D AlexNet performed even better (**86%**) than the 3D CNN with autoencoder (**77%**).
## Method
#### 1. Data
In this project, we used public brain MRI data from the **Alzheimer's Disease Neuroimaging Initiative (ADNI)** study. ADNI is an ongoing, multicenter cohort study that started in 2004 and focuses on understanding the diagnostic and predictive value of Alzheimer's disease-specific biomarkers. The ADNI study has three phases: ADNI1, ADNI-GO, and ADNI2. Both ADNI1 and ADNI2 recruited new AD patients and normal controls as research participants. Our data included a total of 686 structural MRI scans from the ADNI1 and ADNI2 phases, with 310 AD cases and 376 normal controls. We randomly divided the total sample into a training dataset (n = 519), a validation dataset (n = 100), and a testing dataset (n = 67).
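A minimal sketch of such a random split, assuming the 686 scan paths and labels have already been collected into Python arrays (the variable names and placeholder values below are hypothetical, not from the repo):

```python
import numpy as np

# Hypothetical inputs: 686 preprocessed scan paths and AD/control labels.
scan_paths = [f"scan_{i:03d}.nii" for i in range(686)]  # placeholder names
labels = np.random.randint(0, 2, size=686)              # placeholder labels

rng = np.random.default_rng(seed=0)
order = rng.permutation(686)

train_idx = order[:519]       # training dataset (n = 519)
val_idx = order[519:619]      # validation dataset (n = 100)
test_idx = order[619:]        # testing dataset (n = 67)
```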
#### 2. Image preprocessing
Image preprocessing was conducted using Statistical Parametric Mapping (SPM) software, version 12. The original MRI scans were first skull-stripped and segmented using a segmentation algorithm based on 6-tissue probability mapping, and then normalized to the International Consortium for Brain Mapping template of European brains using affine registration. Other configurations included bias, noise, and global intensity normalization. The standard preprocessing process output 3D image files with a uniform size of 121x145x121. Skull-stripping and normalization ensured comparability between images by transforming the original brain image into a standard image space, so that the same brain substructures are aligned at the same image coordinates across participants. Diluted or enhanced intensity was used to compensate for the structural changes. In our project, we used both whole-brain images (including both grey matter and white matter) and grey-matter-only images.
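As a sanity check, the preprocessed volumes can be loaded in Python and verified against the expected shape; a minimal sketch using nibabel (the file name is hypothetical):

```python
import nibabel as nib
import numpy as np

# Load one SPM-preprocessed volume; "subject001_preprocessed.nii" is a
# hypothetical file name standing in for an actual output file.
img = nib.load("subject001_preprocessed.nii")
volume = img.get_fdata().astype(np.float32)
assert volume.shape == (121, 145, 121)   # the uniform size noted above
```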
#### 3. AlexNet and Transfer Learning
Convolutional Neural Networks (CNNs) are very similar to ordinary neural networks. A CNN consists of an input and an output layer, as well as multiple hidden layers, which are either convolutional, pooling, or fully connected. ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These assumptions make the forward function more efficient to implement and vastly reduce the number of parameters in the network.
#### 3.1. AlexNet
The network contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The overall architecture is shown in Figure 1. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. AlexNet maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution. The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (as shown in Figure 1). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer. ![](images/f1.png)
The first convolutional layer filters the 224x224x3 input image with 96 kernels of size 11x11x3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5x5x48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3x3x256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3x3x192, and the fifth convolutional layer has 256 kernels of size 3x3x192. The fully-connected layers have 4096 neurons each.
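This pretrained architecture is available directly in torchvision; a minimal loading sketch (note that torchvision's single-GPU AlexNet variant omits the cross-GPU kernel grouping and response normalization described above, but keeps the five-convolutional, three-fully-connected layout):

```python
import torch
import torchvision.models as models

# Load AlexNet pretrained on ImageNet.
alexnet = models.alexnet(pretrained=True)
print(alexnet)   # inspect the conv and fully-connected layers

# A forward pass expects a batch of 224x224 RGB images.
x = torch.randn(1, 3, 224, 224)
logits = alexnet(x)   # shape (1, 1000): one score per ImageNet class
```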
#### 3.2. Transfer Learning
Training an entire convolutional network from scratch (with random initialization) is impractical [14] because it is relatively rare to have a dataset of sufficient size. A common alternative is to pretrain a ConvNet on a very large dataset (e.g., ImageNet) and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest. Typically, there are three major transfer learning scenarios:
**ConvNet as fixed feature extractor:** We can take a ConvNet pretrained on ImageNet, remove the last fully-connected layer, and treat the remaining structure as a fixed feature extractor for the target dataset. In AlexNet, this yields a 4096-D vector for each image; these features are usually called CNN codes. Once we have these features, we can train a linear classifier (e.g., a linear SVM or softmax classifier) on our target dataset, as sketched below.
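A minimal PyTorch sketch of this scenario, assuming a two-class (AD vs. normal control) target task; this is an illustrative setup, not the repo's exact code:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Freeze the pretrained AlexNet so it acts as a fixed feature extractor.
model = models.alexnet(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-way output layer with a 2-way classifier;
# only this new layer's weights are trained.
model.classifier[6] = nn.Linear(4096, 2)
optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3)
```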
**Fine-tuning the ConvNet:** Another strategy is not only to replace the last fully-connected layer of the classifier, but also to fine-tune the weights of the pretrained network. Due to overfitting concerns, we may fine-tune only the higher-level part of the network. This suggestion is motivated by the observation that the earlier layers of a ConvNet contain more generic features (e.g., edge detectors or color blob detectors) that are useful for many kinds of tasks, while the later layers become progressively more specific to the details of the classes contained in the original dataset. A sketch of this scenario follows below.
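A minimal sketch of fine-tuning under the same assumed two-class setup: the generic early convolutional filters stay frozen, while the pretrained fully-connected layers are updated with a smaller learning rate than the freshly initialized head (the learning rates are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, 2)   # new 2-way head

# Keep the generic early features fixed; fine-tune only the classifier.
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.SGD([
    {"params": model.classifier[:6].parameters(), "lr": 1e-4},  # pretrained FC layers
    {"params": model.classifier[6].parameters(),  "lr": 1e-3},  # new head
], momentum=0.9)
```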
**Pretrained models:** A released pretrained model is usually the final ConvNet checkpoint, so it is common to see people use such networks for fine-tuning.
#### 4. 3D Autoencoder and Convolutional Neural Network
We take a two-stage approach: we first train a 3D sparse autoencoder to learn filters for convolution operations, and then build a convolutional neural network whose first layer uses the filters learned with the autoencoder. ![](images/f2.png)
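A minimal sketch of stage two, showing how learned encoder weights could seed the first 3D convolutional layer; the filter count and patch size (150 filters, 7x7x7 patches) are assumptions for illustration, not values stated in this section:

```python
import torch
import torch.nn as nn

# Stand-ins for the encoder weights and bias produced by stage 1
# (in practice these come from the trained sparse autoencoder).
W = torch.randn(150, 7 * 7 * 7)
b = torch.zeros(150)

# Stage 2: initialize the CNN's first layer with the learned filters,
# reshaped to Conv3d's (out_channels, in_channels, depth, height, width).
conv1 = nn.Conv3d(in_channels=1, out_channels=150, kernel_size=7)
with torch.no_grad():
    conv1.weight.copy_(W.view(150, 1, 7, 7, 7))
    conv1.bias.copy_(b)
```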
#### 4.1. Sparse Autoencoder
An autoencoder is a 3-layer neural network that is used to extract features from an input such as an image. Sparse representations can provide a simple interpretation of the input data in terms of a small number of "parts" by extracting the structure hidden in the data. The autoencoder has an input layer, a hidden layer, and an output layer; the input and output layers have the same number of units, while the hidden layer contains more units for a sparse and overcomplete representation. The encoder function maps input x to representation h, and the decoder function maps the representation h back to the input space, producing a reconstruction of x.
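A minimal sparse-autoencoder sketch in PyTorch, with assumed sizes (343-D flattened 7x7x7 input patches, 512 hidden units) and an L1 penalty standing in for the sparsity constraint; this illustrates the structure described above rather than the repo's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, n_input=343, n_hidden=512):
        super().__init__()
        self.encoder = nn.Linear(n_input, n_hidden)   # maps x -> h
        self.decoder = nn.Linear(n_hidden, n_input)   # maps h -> reconstruction

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

model = SparseAutoencoder()
x = torch.randn(32, 343)                  # batch of flattened 3D patches
x_hat, h = model(x)
loss = F.mse_loss(x_hat, x) + 1e-3 * h.abs().mean()  # reconstruction + sparsity
loss.backward()
```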