# Echo from noise: synthetic ultrasound image generation using diffusion models for real image segmentation · [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](https://github.com/your/your-project/blob/master/LICENSE)
<img src='./README_assets/pipeline.png'>
## Cite this work [Springer]
```bibtex
@InProceedings{10.1007/978-3-031-44521-7_4,
author="Stojanovski, David
and Hermida, Uxio
and Lamata, Pablo
and Beqiri, Arian
and Gomez, Alberto",
editor="Kainz, Bernhard
and Noble, Alison
and Schnabel, Julia
and Khanal, Bishesh
and M{\"u}ller, Johanna Paula
and Day, Thomas",
title="Echo from Noise: Synthetic Ultrasound Image Generation Using Diffusion Models for Real Image Segmentation",
booktitle="Simplifying Medical Ultrasound",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="34--43",
abstract="We propose a novel pipeline for the generation of synthetic ultrasound images via Denoising Diffusion Probabilistic Models (DDPMs) guided by cardiac semantic label maps. We show that these synthetic images can serve as a viable substitute for real data in the training of deep-learning models for ultrasound image analysis tasks such as cardiac segmentation. To demonstrate the effectiveness of this approach, we generated synthetic 2D echocardiograms and trained a neural network for segmenting the left ventricle and left atrium. The performance of the network trained on exclusively synthetic images was evaluated on an unseen dataset of real images and yielded mean Dice scores of 88.6 {\textpm} 4.91, 91.9 {\textpm} 4.22 and 85.2 {\textpm} 4.83{\%} for left ventricular endocardium, epicardium and left atrial segmentation respectively. This represents a relative increase of 9.2, 3.3 and 13.9{\%} in Dice scores compared to the previous state-of-the-art. The proposed pipeline has potential for application to a wide range of other tasks across various medical imaging modalities.",
isbn="978-3-031-44521-7"
}
```
## Papers
### [Echo from noise: synthetic ultrasound image generation using diffusion models for real image segmentation Paper](https://link.springer.com/chapter/10.1007/978-3-031-44521-7_4)
[David Stojanovski](https://scholar.google.com/citations?user=6A_chPAAAAAJ&hl=en), [Uxio Hermida](https://scholar.google.com/citations?hl=en&user=6DkZyrXMyKEC), [Pablo Lamata](https://scholar.google.com/citations?hl=en&user=H98n1tsAAAAJ), [Arian Beqiri](https://scholar.google.com/citations?hl=en&user=osD0r24AAAAJ&view_op=list_works&sortby=pubdate), [Alberto Gomez](https://scholar.google.com/citations?hl=en&user=T4fP_swAAAAJ&view_op=list_works&sortby=pubdate)
### [Semantic Diffusion Model Paper](https://arxiv.org/abs/2207.00050)
[Weilun Wang](https://scholar.google.com/citations?hl=zh-CN&user=YfV4aCQAAAAJ), [Jianmin Bao](https://scholar.google.com/citations?hl=zh-CN&user=hjwvkYUAAAAJ), [Wengang Zhou](https://scholar.google.com/citations?hl=zh-CN&user=8s1JF8YAAAAJ), [Dongdong Chen](https://scholar.google.com/citations?hl=zh-CN&user=sYKpKqEAAAAJ), [Dong Chen](https://scholar.google.com/citations?hl=zh-CN&user=_fKSYOwAAAAJ), [Lu Yuan](https://scholar.google.com/citations?hl=zh-CN&user=k9TsUVsAAAAJ), [Houqiang Li](https://scholar.google.com/citations?hl=zh-CN&user=7sFMIKoAAAAJ)
## Abstract
We propose a novel pipeline for the generation of synthetic images via Denoising Diffusion Probabilistic Models (DDPMs)
guided by cardiac ultrasound semantic label maps. We show that these synthetic images can serve as a viable substitute
for real data in the training of deep-learning models for medical image analysis tasks such as image segmentation. To
demonstrate the effectiveness of this approach, we generated synthetic 2D echocardiography images and trained a neural
network for segmentation of the left ventricle and left atrium. The performance of the network trained on exclusively
synthetic images was evaluated on an unseen dataset of real images and yielded mean Dice scores of 88.5 $\pm$ 6.0, 92.3
$\pm$ 3.9 and 86.3 $\pm$ 10.7 % for left ventricular endocardial, epicardial and left atrial segmentation respectively.
This represents an increase of 9.09, 3.7 and 15.0 % in Dice scores compared to the previous state-of-the-art. The
proposed pipeline has the potential for application to a wide range of other tasks across various medical imaging
modalities.
## Example Results
<img src='./README_assets/SDM_example_views.png'>
<img src='./README_assets/transforms.png'>
## Prerequisites
- Linux
- Python 3
- CPU or NVIDIA GPU + CUDA cuDNN
## Dataset Preparation
The data used and generated for the paper can be found as follows:
1) The CAMUS data used for training and testing can be
found [here](https://humanheart-project.creatis.insa-lyon.fr/database/#collection/6373703d73e9f0047faa1bc8/folder/6373727d73e9f0047faa1bca).
2) The generated synthetic data and pretrained models can be
found [here](https://zenodo.org/record/7921055#.ZFyqd9LMLmE).
- A script to extract the CAMUS data into the required format can be found
in `./data_preparation/extract_camus_data.py`. Only the `camus_data_folder`
and `save_folder_path` variables need to be edited.
- To augment the extracted CAMUS data, the script `./data_preparation/augment_camus_labels.py` can be used. Again,
only the `data_folder` and `save_folder` variables need to be edited.
- After the SDM has been trained and used for inference (as explained below), the generated images can be placed in the
correct folder format for the segmentation task using the `./data_preparation/prepare_inference4segmentation.py`
script. The `testing_data_folder` variable is the path to the original CAMUS data, `sdm_results_folder` is the path to
the SDM inference results, and `save_folder` is the path where the prepared data is saved.
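The preparation steps above feed the segmentation stage, which expects one folder per data split (note the `--data-dir /path/to/data/%s/` pattern used later). A minimal sketch of such a layout, where the subfolder names `images/` and `labels/` are illustrative assumptions rather than the scripts' documented output:

```python
from pathlib import Path
import tempfile

# Hypothetical sketch: one folder per split, matching the
# `--data-dir /path/to/data/%s/` pattern. Subfolder names here
# (`images`, `labels`) are assumptions for illustration only.
root = Path(tempfile.mkdtemp())
for split in ("train", "val", "test"):
    for sub in ("images", "labels"):
        (root / split / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*")))
```

Running this prints the nine created directories, one per split/subfolder pair.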
# Semantic Diffusion Model
The default parameters for training and inference can be found in the `./semantic_diffusion_model/config.py` file.
The original network on which our code is based can be
found [here](https://github.com/WeilunWang/semantic-diffusion-model). That repository also contains a number of scripts
with parameter variations for both training and inference.
## SDM training
To train the SDM model run:
```bash
mpiexec -np 8 python3 ./image_train.py --datadir ./data/view_folder --savedir ./output --batch_size_train 12 \
--is_train True --save_interval 50000 --lr_anneal_steps 50000 --random_flip True --deterministic_train False \
--img_size 256
```
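For reference, `mpiexec -np 8` launches eight data-parallel workers. Assuming each worker processes a full `--batch_size_train` per step (an assumption about the training loop, not documented behaviour), the effective global batch size is:

```python
# Back-of-envelope check (assumption: each MPI worker consumes a full
# per-process batch of 12): 8 processes x 12 samples = 96 samples/step.
n_procs = 8
batch_per_proc = 12
effective_batch = n_procs * batch_per_proc
print(effective_batch)  # → 96
```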
## SDM inference
To run inference with the SDM model:
```bash
mpiexec -np 8 python3 ./image_sample.py --datadir ./data/view_folder \
--resume_checkpoint ./path/to/ema_checkpoint.pt --results_dir ./results_2CH_ED --num_samples 2250 \
--is_train False --inference_on_train True
```
# Segmentation Network
## Training Segmentation Network
The main script for training the segmentation network is `./echo_segmentations/runner.py`. An example of how to
run this script is as follows:
```bash
python ./echo_segmentations/runner.py --data-dir /path/to/data/%s/
```
The argparse defaults are the parameters that were used to train the network; they can be found
in `./echo_segmentations/runner.py`.
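The `%s` in `--data-dir` is a printf-style placeholder; assuming the runner substitutes a split name into it at load time (an inference from the path pattern, not documented behaviour), it expands like so:

```python
# Hypothetical illustration: the `%s` in --data-dir is assumed to be
# filled with a split name when each dataset is loaded.
data_dir = "/path/to/data/%s/"
print(data_dir % "train")  # → /path/to/data/train/
print(data_dir % "test")   # → /path/to/data/test/
```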
## Testing Segmentation Network
The main script for testing the segmentation network is `./echo_segmentations/test_model.py`. An example of how
to run this script is as follows:
```bash
python ./echo_segmentations/test_model.py --data-dir /path/to/data/%s/ --model-path /path/to/model
```
The argparse defaults are the parameters that were used to test the network; they can be found
in `./echo_segmentations/test_model.py`.