# Swin-Unet
The code for the paper ["Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation"](https://arxiv.org/abs/2105.05537), a validation of a U-shaped Swin Transformer. Our paper has been accepted by the ECCV 2022 [Medical Computer Vision Workshop](https://mcv-workshop.github.io/). We have updated the Reproducibility section below; we hope it helps you reproduce the results.
## 1. Download pre-trained swin transformer model (Swin-T)
* [Get the pre-trained model at this link](https://drive.google.com/drive/folders/1UC3XOoezeum0uck4KBVGa8osahs6rKUY?usp=sharing), and put the pretrained Swin-T checkpoint into the folder "pretrained_ckpt/".
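The checkpoint is a standard PyTorch state dict. As a rough, hypothetical sketch of the idea discussed in the Reproducibility section (initializing both the encoder and the mirrored decoder from the same pretrained weights), assuming invented key names; the repository's network code contains the actual loading logic:

```python
# Hypothetical sketch: copy pretrained encoder weights into both the encoder
# and the mirrored decoder blocks of a U-shaped transformer. The key names
# below are invented for illustration; a real checkpoint would be read with
# torch.load("pretrained_ckpt/swin_tiny_patch4_window7_224.pth").
pretrained_state = {
    "layers.0.blocks.0.mlp.weight": [0.1, 0.2],
    "layers.1.blocks.0.mlp.weight": [0.3, 0.4],
}

model_state = {}
for key, value in pretrained_state.items():
    model_state[key] = value                            # encoder: copy as-is
    decoder_key = key.replace("layers.", "layers_up.")  # decoder: mirrored name
    model_state[decoder_key] = value

print(sorted(model_state))
```

The point is only that every pretrained encoder tensor is reused twice, once under its own name and once under a decoder-side name.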
## 2. Prepare data
- The datasets we used are provided by the TransUnet authors. [Get the processed data at this link](https://drive.google.com/drive/folders/1ACJEoTp-uqfFJ73qS3eUObQh52nGuzCd). Please see ["./datasets/README.md"](datasets/README.md) for details, or send an email to jienengchen01 AT gmail.com to request the preprocessed data. If you use the preprocessed data, please use it for research purposes only and do not redistribute it (following the TransUnet license).
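For orientation, a minimal sketch of the data format, assuming the common TransUnet-style preprocessing of one `.npz` file per training slice with `image` and `label` arrays (this is an assumption; check `datasets/README.md` for the authoritative layout):

```python
import io
import numpy as np

# Write a dummy slice in the assumed per-slice .npz format
# (in-memory buffer used here instead of a real file on disk).
buf = io.BytesIO()
image = np.zeros((224, 224), dtype=np.float32)  # one 2D CT slice
label = np.zeros((224, 224), dtype=np.uint8)    # per-pixel class ids
np.savez(buf, image=image, label=label)

# Read it back the way a dataset loader would.
buf.seek(0)
data = np.load(buf)
print(data["image"].shape, data["label"].dtype)
```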
## 3. Environment
- Please prepare an environment with Python 3.7, then run "pip install -r requirements.txt" to install the dependencies.
## 4. Train/Test
- Run the training script on the Synapse dataset. We used a batch size of 24; if you do not have enough GPU memory, the batch size can be reduced to 12 or 6 to save memory.
- Train
```bash
sh train.sh
# or
python train.py --dataset Synapse --cfg configs/swin_tiny_patch4_window7_224_lite.yaml --root_path <DATA_DIR> --max_epochs 150 --output_dir <OUT_DIR> --img_size 224 --base_lr 0.05 --batch_size 24
```
- Test
```bash
sh test.sh
# or
python test.py --dataset Synapse --cfg configs/swin_tiny_patch4_window7_224_lite.yaml --is_saveni --volume_path <DATA_DIR> --output_dir <OUT_DIR> --max_epoch 150 --base_lr 0.05 --img_size 224 --batch_size 24
```
## Reproducibility
- Questions about the datasets
Many of you have asked me for the datasets, and I personally would be glad to share the preprocessed Synapse and ACDC datasets with you. However, I am not the owner of these two preprocessed datasets. Please email jienengchen01 AT gmail.com to get the processed datasets.
- Code
Our trained model is stored on the Huawei cloud, and interns are not allowed to send files out of the internal system, so I cannot share our trained model weights. Regarding reproducing the segmentation results in the paper: we found that different GPU types can produce different results. Our code carefully sets the random seed, so results should be consistent across repeated training runs on the same GPU type (we used a Tesla V100). If training does not reproduce the segmentation results in the paper, we recommend adjusting the learning rate. Finally, pre-training is very important for pure transformer models: in our experiments, both the encoder and the decoder are initialized with pretrained weights, rather than only the encoder.
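The seed-fixing mentioned above can be sketched as follows; the helper name and the guarded torch calls are illustrative, not the repository's exact code:

```python
import random
import numpy as np

def set_seed(seed: int = 1234) -> None:
    """Illustrative helper: fix RNG seeds so repeated runs match
    (on the same hardware; different GPU types may still diverge)."""
    random.seed(seed)
    np.random.seed(seed)
    try:  # torch may not be installed in this sketch environment
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass

# Two runs from the same seed draw identical numbers.
set_seed(1234)
first = np.random.rand(3)
set_seed(1234)
second = np.random.rand(3)
```

Even with fixed seeds, some CUDA kernels are nondeterministic, which is one reason results can differ across GPU types.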
## References
* [TransUnet](https://github.com/Beckschen/TransUNet)
* [SwinTransformer](https://github.com/microsoft/Swin-Transformer)
## Citation
```bibtex
@inproceedings{swinunet,
  author    = {Hu Cao and Yueyue Wang and Joy Chen and Dongsheng Jiang and Xiaopeng Zhang and Qi Tian and Manning Wang},
  title     = {Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation},
  booktitle = {Proceedings of the European Conference on Computer Vision Workshops (ECCVW)},
  year      = {2022}
}
@misc{cao2021swinunet,
  title         = {Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation},
  author        = {Hu Cao and Yueyue Wang and Joy Chen and Dongsheng Jiang and Xiaopeng Zhang and Qi Tian and Manning Wang},
  year          = {2021},
  eprint        = {2105.05537},
  archivePrefix = {arXiv},
  primaryClass  = {eess.IV}
}
```