# Deep Learning-Based Lung Nodule Detection Algorithm
## Features
- **3D** Segmentation & Classification with Keras
- Careful **preprocessing** with scikit-image
- Rich **visualization** for clarity
- Modified UNet for **segmentation**
- Modified VGG/Inception/ResNet/DenseNet for a **classification ensemble**
- Thorough **hyperparameter** tuning of both the models and the training process
## Code Hierarchy
```
- config.py # good practice to centralize hyperparameters
- preprocess.py # Step 1: preprocess, store numpy/meta 'cache' at ./preprocess/
- train_segmentation.py # Step 2: segmentation with the UNet model
- model_UNet.py # UNet model definition
- train_classification.py # Step 3: classification with VGG/Inception/ResNet/DenseNet
- model_VGG.py # VGG model definition
- model_Inception.py # Inception model definition
- model_ResNet.py # ResNet model definition
- model_DenseNet.py # DenseNet model definition
- generators.py # generators for the segmentation & classification models
- visual_utils.py # 3D visualization tools
- dataset/ # dataset location, set in config.py
- preprocess/ # 'cache' of preprocessed numpy/meta data, set in config.py
- train_ipynbs # training-process notebooks
```
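As a minimal illustration of centralizing hyperparameters in `config.py` (all key names below are hypothetical stand-ins, not the repo's actual keys; the numeric values are the ones quoted elsewhere in this README):

```python
# config.py -- hypothetical sketch of a centralized config module;
# the real key names live in the repo's config.py

# paths
DATASET_DIR = "./dataset/"
PREPROCESS_DIR = "./preprocess/"

# training
BATCH_SIZE = 16               # limited by GPU memory
UNET_LEARNING_RATE = 3e-5     # works well for the UNet
CLS_LEARNING_RATE = 1e-4      # works well for the classification models

# debug switches
DEBUG_PLOT_EVERY_N_BATCHES = 0  # 0 disables debug plotting
```

Every script then imports from this one module (`from config import BATCH_SIZE`), so a change in one place propagates everywhere.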
## Preprocess
- use `SimpleITK` to read the CT files, process them, and store the results as numpy arrays in a cache
- process with the `scikit-image` library; many parameter combinations were tried for the best lung cutting:
  - binarize
  - clear border
  - label regions
  - closing
  - dilation
- collect all meta information (seriesuid, shape, file_path, origin, spacing, coordinates, cover_ratio, etc.) and store it in **ONE** cache file for fast training initialization
- see the preprocessing walkthrough in `/train_ipynbs/preprocess.ipynb`
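The cutting steps above can be sketched roughly as follows. This is a simplified stand-in using `scipy.ndimage` rather than the repo's scikit-image calls; the HU threshold and structuring-element sizes are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def lung_mask(volume, threshold=-320):
    """Rough 3D lung mask following the listed steps:
    binarize -> clear border -> label regions -> closing -> dilation.
    `volume` is a HU array, e.g. from
    SimpleITK: sitk.GetArrayFromImage(sitk.ReadImage(path))."""
    # 1. binarize: lung/air voxels have low Hounsfield values
    binary = volume < threshold
    # 2. clear border: drop connected components touching the volume edge
    labels, _ = ndimage.label(binary)
    edge_ids = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    binary &= ~np.isin(labels, edge_ids[edge_ids != 0])
    # 3. label regions, keep the two largest (the lungs)
    labels, n = ndimage.label(binary)
    if n > 2:
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        keep = np.argsort(sizes)[-2:] + 1
        binary = np.isin(labels, keep)
    # 4. closing fills vessels; dilation recovers nodules at the lung wall
    binary = ndimage.binary_closing(binary, structure=np.ones((3, 3, 3)))
    binary = ndimage.binary_dilation(binary, structure=np.ones((3, 3, 3)))
    return binary
```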
Distribution of the fraction of each whole CT occupied by the lung region:
<img src='./assets/preprocess-cover-ratio.png'>
Tumor size distribution:
<img src='./assets/preprocess-diameter-mm.png'>
## Segmentation
- Both a **simplified and a full UNet** were tested.
- **`dice_coef_loss`** is used as the loss function.
- The model is periodically evaluated with **many metrics**, which helps a lot in understanding its behavior.
- 30% of the samples are negatives (no tumor), for better generalization.
- Due to memory limitations, a batch size of 16 is used.
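For reference, the Dice loss can be sketched as below. This is a plain numpy version to show the arithmetic; the repo's `dice_coef_loss` presumably uses Keras backend ops on tensors, and the `smooth` term is a common convention assumed here:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # Soerensen-Dice coefficient: 2|A n B| / (|A| + |B|);
    # `smooth` avoids division by zero on empty masks
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_coef_loss(y_true, y_pred):
    # perfect overlap -> loss 0; no overlap -> loss close to 1
    return 1.0 - dice_coef(y_true, y_pred)
```

Dice is preferred over plain cross-entropy here because tumor voxels are a tiny fraction of each volume, and Dice is insensitive to that class imbalance.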
<img src='./assets/segmentation.png'>
## Classification
### VGG
- Both a simplified and a full VGG model were tested; the simplified VGG is used as the baseline.
<img src='./assets/VGG.png'>
The picture shows that **hyperparameter tuning really matters**.
### Inception
- A simplified Inception-module-based network; each block has 4-5 branches with different conv types:
  - 1\*1\*1 **depthwise-separable conv**
  - 1\*1\*1 **depthwise-separable conv**, then a 3\*3\*3 conv_bn_relu
  - 1\*1\*1 **depthwise-separable conv**, then two 3\*3\*3 conv_bn_relu
  - AveragePooling3D, then a 1\*1\*1 **depthwise-separable conv**
  - (optional in config) 1\*1\*1 **depthwise-separable conv**, then (5, 1, 1), (1, 5, 1), (1, 1, 5) **spatially separable convolutions**
  - Concatenate the branches above.
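The four mandatory branches can be sketched in `tensorflow.keras` as follows. Filter counts, the plain `Conv3D` in place of a true depthwise-separable op, and the helper names are all illustrative assumptions, not the repo's code:

```python
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel_size):
    x = layers.Conv3D(filters, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def inception_block(x, filters=16):
    # Branch 1: 1x1x1 pointwise conv (channel mixing only)
    b1 = conv_bn_relu(x, filters, (1, 1, 1))
    # Branch 2: 1x1x1 reduction, then a 3x3x3 conv
    b2 = conv_bn_relu(x, filters, (1, 1, 1))
    b2 = conv_bn_relu(b2, filters, (3, 3, 3))
    # Branch 3: 1x1x1 reduction, then two stacked 3x3x3 convs
    b3 = conv_bn_relu(x, filters, (1, 1, 1))
    b3 = conv_bn_relu(b3, filters, (3, 3, 3))
    b3 = conv_bn_relu(b3, filters, (3, 3, 3))
    # Branch 4: average pooling, then a 1x1x1 conv
    b4 = layers.AveragePooling3D((3, 3, 3), strides=(1, 1, 1), padding="same")(x)
    b4 = conv_bn_relu(b4, filters, (1, 1, 1))
    # Concatenate all branches along the channel axis
    return layers.Concatenate()([b1, b2, b3, b4])
```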
<img src='./assets/Inception.png'>
### ResNet
- Uses the `bottleneck` block instead of the `basic_block` in the implementation.
- A `bottleneck` **residual block** consists of:
  - (1, 1, 1) conv_bn_relu
  - (3, 3, 3) conv_bn_relu
  - (1, 1, 1) conv_bn_relu
  - (optional in config) a kernel_size=(3, 3, 3), strides=(2, 2, 2) conv_bn_relu for compression
  - **Add (not Concatenate)** with the input
- `RESNET_BLOCKS` is left as a config option to tune.
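A minimal sketch of the non-strided bottleneck block, assuming `tensorflow.keras`; filter counts and helper names are illustrative, and the input is assumed to already have `filters` channels so the Add lines up:

```python
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel_size):
    x = layers.Conv3D(filters, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def bottleneck_block(x, filters):
    y = conv_bn_relu(x, filters, (1, 1, 1))  # reduce/mix channels
    y = conv_bn_relu(y, filters, (3, 3, 3))  # spatial conv
    y = conv_bn_relu(y, filters, (1, 1, 1))  # restore channels
    # Add (not Concatenate): element-wise sum with the shortcut,
    # so the block learns a residual on top of the identity
    return layers.Add()([x, y])
```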
<img src='./assets/ResNet.png'>
### DenseNet
- `DenseNet` draws heavily on the original paper: [https://arxiv.org/abs/1608.06993](https://arxiv.org/abs/1608.06993)
- 3 dense\_blocks with 5 bn\_relu\_conv layers each, following the paper.
- A transition\_block after every dense\_block except the last.
- Optional config for **DenseNet-BC** (as the paper calls it): **1\*1\*1 depthwise-separable conv** and **transition_block compression**.
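The dense/transition structure can be sketched as below, assuming `tensorflow.keras`; the growth rate, compression factor, and names are illustrative assumptions, with only the 5-layers-per-block count taken from the text:

```python
from tensorflow.keras import layers

def bn_relu_conv(x, growth_rate):
    # pre-activation ordering (BN -> ReLU -> conv), as in the DenseNet paper
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.Conv3D(growth_rate, (3, 3, 3), padding="same")(x)

def dense_block(x, n_layers=5, growth_rate=8):
    # each layer's output is concatenated onto the running feature map,
    # so every layer sees all earlier feature maps
    for _ in range(n_layers):
        y = bn_relu_conv(x, growth_rate)
        x = layers.Concatenate()([x, y])
    return x

def transition_block(x, compression=0.5):
    # 1x1x1 conv compresses channels (the "C" in DenseNet-BC), then downsample
    filters = int(int(x.shape[-1]) * compression)
    x = layers.Conv3D(filters, (1, 1, 1), padding="same")(x)
    return layers.AveragePooling3D((2, 2, 2))(x)
```

With an input of `c` channels, a dense block emits `c + n_layers * growth_rate` channels, which is why transitions are needed to keep the model small.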
<img src='./assets/DenseNet.png'>
## Fine-Tuning & Lessons Learned
- Learning rate: `3e-5` works well for the UNet, `1e-4` for the classification models.
- Due to memory limitations, a batch size of 16 is used.
- Data augmentation: shift, rotate, etc.
- **Visualization could not be more important!**
- coord (x, y, z) corresponds to (width, height, depth); a source of naughty bugs.
- **Putting all config in one file saves tons of time. Keep everything clean and tidy.**
- Disk reads are the bottleneck. Read from an **SSD**.
- Each run gets its own log dir for better TensorBoard visualization, e.g. **`/train_logs/<model-name>-run-<hour>-<minute>`**.
- Lots of **debug options** in the config file.
- Sampling probability is boosted 4x for tumors < 10mm and 3x for tumors between 10mm and 30mm, and kept unchanged for > 30mm, giving more focus to small tumors like the one below.
<img src='./assets/small-tumor.png'>
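The size-based oversampling above can be sketched like this; the helper names and the generator wiring are assumptions, and only the 4x/3x/1x factors and size cut-offs come from the text:

```python
import numpy as np

def sampling_weight(diameter_mm):
    # 4x boost for small tumors, 3x for medium, baseline for large
    if diameter_mm < 10:
        return 4.0
    if diameter_mm < 30:
        return 3.0
    return 1.0

def sampling_probs(diameters_mm):
    # normalize per-sample weights into a probability distribution
    w = np.array([sampling_weight(d) for d in diameters_mm])
    return w / w.sum()
```

A data generator can then draw sample indices with `np.random.choice(len(diameters), p=sampling_probs(diameters))` so small tumors appear more often per epoch.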