# SAM-Med2D \[[Paper](https://arxiv.org/abs/2308.16184)]
[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_app.svg)](https://openxlab.org.cn/apps/detail/GMAI/SAM-Med2D)
<a href="https://openxlab.org.cn/datasets/GMAI/SA-Med2D-20M"><img src="https://img.shields.io/badge/Data-SAMed2D_20M-blue?logo=red"></a>
<a href="https://arxiv.org/abs/2308.16184"><img src="https://img.shields.io/badge/cs.CV-2308.16184-b31b1b?logo=arxiv&logoColor=red"></a>
<a href="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/SAM-Med2D_wechat_group.jpeg"><img src="https://img.shields.io/badge/WeChat-Group-green?logo=wechat"></a>
<a target="_blank" href="https://colab.research.google.com/github/OpenGVLab/SAM-Med2D/blob/main/predictor_example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
[![GitHub Stars](https://img.shields.io/github/stars/OpenGVLab/SAM-Med2D.svg?style=social&label=Star&maxAge=60)](https://github.com/OpenGVLab/SAM-Med2D)🔥🔥🔥
<!-- ## Description -->
## 🌤️ Highlights
- 🏆 Collected and curated the largest medical image segmentation dataset to date (4.6M images and 19.7M masks) for model training.
- 🏆 The most comprehensive fine-tuning based on the Segment Anything Model (SAM).
- 🏆 Comprehensive evaluation of SAM-Med2D on large-scale datasets.
## 🔥 Updates
- (2023.12.05) The dataset is now available for download on the [Hugging Face](https://huggingface.co/datasets/OpenGVLab/SA-Med2D-20M) platform
- (2023.11.23) We have released the [SA-Med2D-20M](https://openxlab.org.cn/datasets/GMAI/SA-Med2D-20M) dataset
- (2023.11.21) We have released an article introducing the [SA-Med2D-20M](https://arxiv.org/abs/2311.11969) dataset
- (2023.10.24) We have released [SAM-Med3D](https://github.com/uni-medical/SAM-Med3D), which focuses on the segmentation of 3D medical images
- (2023.09.14) Training code released
- (2023.09.02) Testing code released
- (2023.08.31) Pre-trained model released
- (2023.08.31) Paper released
- (2023.08.26) Online demo released
## 👉 Dataset
SAM-Med2D is trained and tested on a dataset that includes **4.6M images** and **19.7M masks**. This dataset covers 10 medical data modalities, 4 anatomical structures + lesions, and 31 major human organs. To our knowledge, this is currently the largest and most diverse medical image segmentation dataset in terms of quantity and coverage of categories.
<p align="center"><img width="800" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/dataset.png"></p>
## 👉 Framework
The pipeline of SAM-Med2D. We freeze the image encoder and incorporate learnable adapter layers in each Transformer block to acquire domain-specific knowledge in the medical field. We fine-tune the prompt encoder using point, Bbox, and mask information, while updating the parameters of the mask decoder through interactive training.
<p align="center"><img width="800" alt="image" src="https://github.com/OpenGVLab/SAM-Med2D/blob/main/assets/framwork.png"></p>
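As a rough illustration of this fine-tuning scheme, here is a minimal PyTorch sketch: a bottleneck adapter of the kind typically inserted into Transformer blocks, plus a helper that freezes the image-encoder backbone while leaving adapters, the prompt encoder, and the mask decoder trainable. The `Adapter` class and the parameter-name checks are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Illustrative bottleneck adapter: down-project, non-linearity,
    up-project, residual connection back onto the block's features."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def configure_for_finetuning(sam_model: nn.Module) -> None:
    """Freeze the image-encoder backbone; keep adapter layers, the prompt
    encoder, and the mask decoder trainable (the scheme described above).
    Parameter-name prefixes are assumptions for this sketch."""
    for name, param in sam_model.named_parameters():
        if name.startswith("image_encoder") and "adapter" not in name.lower():
            param.requires_grad = False   # frozen pre-trained encoder weights
        else:
            param.requires_grad = True    # adapters, prompt encoder, mask decoder
```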
## 👉 Results
<table>
<caption align="center">Quantitative comparison of different methods on the test set: </caption>
<thead>
<tr>
<th>Model</th>
<th>Resolution</th>
<th>Bbox (%)</th>
<th>1 pt (%)</th>
<th>3 pts (%)</th>
<th>5 pts (%)</th>
<th>FPS</th>
<th>Checkpoint</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">SAM</td>
<td align="center">$256\times256$</td>
<td align="center">61.63</td>
<td align="center">18.94</td>
<td align="center">28.28</td>
<td align="center">37.47</td>
<td align="center">51</td>
<td align="center"><a href="https://drive.google.com/file/d/1_U26MIJhWnWVwmI5JkGg2cd2J6MvkqU-/view?usp=drive_link">Offical</a></td>
</tr>
<tr>
<td align="center">SAM</td>
<td align="center">$1024\times1024$</td>
<td align="center">74.49</td>
<td align="center">36.88</td>
<td align="center">42.00</td>
<td align="center">47.57</td>
<td align="center">8</td>
<td align="center"><a href="https://drive.google.com/file/d/1_U26MIJhWnWVwmI5JkGg2cd2J6MvkqU-/view?usp=drive_link">Offical</a></td>
</tr>
<tr>
<td align="center">FT-SAM</td>
<td align="center">$256\times256$</td>
<td align="center">73.56</td>
<td align="center">60.11</td>
<td align="center">70.95</td>
<td align="center">75.51</td>
<td align="center">51</td>
<td align="center"><a href="https://drive.google.com/file/d/1J4qQt9MZZYdv1eoxMTJ4FL8Fz65iUFM8/view?usp=drive_link">FT-SAM</a></td>
</tr>
<tr>
<td align="center">SAM-Med2D</td>
<td align="center">$256\times256$</td>
<td align="center">79.30</td>
<td align="center">70.01</td>
<td align="center">76.35</td>
<td align="center">78.68</td>
<td align="center">35</td>
<td align="center"><a href="https://drive.google.com/file/d/1ARiB5RkSsWmAB_8mqWnwDF8ZKTtFwsjl/view?usp=drive_link">SAM-Med2D</a></td>
</tr>
</tbody>
</table>
Baidu Netdisk link: https://pan.baidu.com/s/1HWo_s8O7r4iQI6irMYU8vQ?pwd=dk5x
Extraction code: dk5x
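For reference, the prompt modes compared in the table above (Bbox, 1 pt, 3 pts, 5 pts) map directly onto a SAM-style predictor interface. The sketch below uses the vanilla `segment_anything` API from Meta's SAM; SAM-Med2D ships its own predictor with the same call pattern (see `predictor_example.ipynb`), so treat the model type, checkpoint path, and image file here as placeholders.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM-style checkpoint (model type and path are placeholders).
sam = sam_model_registry["vit_b"](checkpoint="pretrain_model/sam-med2d_b.pth")
predictor = SamPredictor(sam)

# Compute the image embedding once; prompts can then be varied cheaply.
image = cv2.cvtColor(cv2.imread("example_ct_slice.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# 1-point prompt: a single foreground click (label 1 = foreground).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[128, 128]]),
    point_labels=np.array([1]),
    multimask_output=False,
)

# Bbox prompt: an [x0, y0, x1, y1] box around the target structure.
masks, scores, _ = predictor.predict(
    box=np.array([60, 60, 200, 200]),
    multimask_output=False,
)
```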
<table>
<caption align="center">Generalization validation on 9 MICCAI2023 datasets, where "*" denotes that we drop adapter layer of SAM-Med2D in test phase: </caption>
<thead>
<tr>
<th rowspan="2">Datasets</th>
<th colspan="3">Bbox prompt (%)</th>
<th colspan="3">1 point prompt (%)</th>
</tr>
<tr>
<th>SAM</th>
<th>SAM-Med2D*</th>
<th>SAM-Med2D</th>
<th>SAM</th>
<th>SAM-Med2D*</th>
<th>SAM-Med2D</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><a href="https://www.synapse.org/#!Synapse:syn51236108/wiki/621615">CrossMoDA23</a></td>
<td align="center">78.12</td>
<td align="center">86.26</td>
<td align="center">88.42</td>
<td align="center">33.84</td>
<td align="center">65.85</td>
<td align="center">85.26</td>
</tr>
<tr>
<td align="center"><a href="https://kits-challenge.org/kits23/">KiTS23</a></td>
<td align="center">81.52</td>
<td align="center">86.14</td>
<td align="center">89.89</td>
<td align="center">31.36</td>
<td align="center">56.67</td>
<td align="center">83.71</td>
</tr>
<tr>
<td align="center"><a href="https://codalab.lisn.upsaclay.fr/competitions/12239#learn_the_details">FLARE23</a></td>
<td align="center">73.20</td>
<td align="center">77.18</td>
<td align="center">85.09</td>
<td align="center">19.87</td>
<td align="center">32.01</td>
<td align="center">77.17</td>
</tr>
<tr>
<td align="center"><a href="https://atlas-challenge.u-bourgogne.fr/">ATLAS2023</a></td>
<td align="center">76.98</td>
<td align="center">79.09</td>
<td align="center">82.59</td>
<td align="center">29.07</td>
<td align="center">45.25</td>
<td align="center">64.76</td>
</tr>
<tr>
<td align="center"><a href="https://multicenteraorta.grand-challenge.org/">SEG2023</a></td>
<td align="center">64.82</td>
<td align="center">81.85</td>
<td align="center">85.09</td>
<td align="center">21.15</td>
<td align="center">34.71</td>
<td align="center">72.08</td>
</tr>
<tr>
<td align="center"><a href="https://lnq2023.grand-challenge.org/lnq2023/">LNQ2023</a></td>
<td align="center">53.02</td>
<td align="center">57.37</td>
<td align="center">58.01</td>
<td align="center">7.05</td>
<td align="center">7.21</td>
<td align="center">37.64</td>
</tr>
</tbody>
</table>