MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation

Segmentation · Computer science · Artificial intelligence · Encoder · Computer vision · Medical imaging · Image segmentation · Modality (human–computer interaction) · Pattern recognition (psychology) · Operating systems
Authors
Cheng Chen,Juzheng Miao,Dufan Wu,Aoxiao Zhong,Zhiling Yan,Sekeun Kim,Jiang Hu,Zhengliang Liu,Lichao Sun,Xiang Li,Tianming Liu,Pheng‐Ann Heng,Quanzheng Li
Source
Journal: Medical Image Analysis [Elsevier]
Volume 98: 103310 · Citations: 65
Identifier
DOI:10.1016/j.media.2024.103310
Abstract

The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance declines significantly when applied to medical images, primarily due to the substantial disparity between the natural and medical image domains. To effectively adapt SAM to medical images, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning. At the same time, we aim to harness SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy, updating only a small set of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from the input data. We comprehensively evaluate our method on five medical image segmentation tasks, using 11 public datasets spanning CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation, respectively. Our model also demonstrates strong generalization, and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
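The core mechanism the abstract describes, lightweight 3D adapters injected into the 2D transformer blocks so that the largely frozen backbone can mix information across slices, can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the bottleneck projection sizes, the depth-axis 1-D mixing, and the zero-initialized up-projection (so the adapter starts as an identity map and does not perturb the pre-trained model) are assumptions based on common adapter designs.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

class Adapter3D:
    """Hypothetical 3D adapter sketch: down-project token features,
    mix information across the slice (depth) axis with a small 1-D
    convolution, apply a nonlinearity, up-project, and add a residual.
    Only these small matrices would be trained; the backbone stays frozen."""

    def __init__(self, dim, bottleneck, depth_kernel=3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0, 0.02, (dim, bottleneck))
        # zero-init so the adapter is initially an identity mapping
        self.w_up = np.zeros((bottleneck, dim))
        self.depth_filter = rng.normal(0, 0.02, (depth_kernel, bottleneck))

    def __call__(self, x):
        # x: (D, N, dim) -- D slices of a volume, N tokens per slice
        h = x @ self.w_down                       # (D, N, bottleneck)
        k = self.depth_filter.shape[0]
        pad = k // 2
        hp = np.pad(h, ((pad, pad), (0, 0), (0, 0)))  # 'same' padding on depth
        mixed = np.zeros_like(h)
        for i in range(k):                        # 1-D conv over the slice axis
            mixed += hp[i:i + h.shape[0]] * self.depth_filter[i]
        h = gelu(mixed)
        return x + h @ self.w_up                  # residual, shape (D, N, dim)
```

Because `w_up` starts at zero, inserting the adapter leaves the backbone's output unchanged at initialization; fine-tuning then only has to learn the small increments, which is the parameter-efficient aspect the abstract refers to.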