DMSA-UNet: Dual Multi-Scale Attention makes UNet more strong for medical image segmentation

Authors
Xiang Li, Chong Fu, Qun Wang, Wenchao Zhang, Chiu‐Wing Sham, Junxin Chen
Source
Journal: Knowledge-Based Systems [Elsevier]
Volume 299, Article 112050 · Cited by 13
Identifier
DOI: 10.1016/j.knosys.2024.112050
Abstract

Convolutional Neural Networks (CNNs), particularly UNet, have become prevalent in medical image segmentation tasks. However, CNNs inherently struggle to capture global dependencies owing to their intrinsic localities. Although Transformers have shown superior performance in modeling global dependencies, they encounter the challenges of high model complexity and dependencies on large-scale pre-trained models. Furthermore, the current attention mechanisms of Transformers only consider single-scale feature interactions, making it difficult to analyze feature correlations at different scales in the same attention layer. In this paper, we propose DMSA-UNet, which strengthens the global analysis capability and maximally preserves the local inductive bias capability while maintaining low model complexity. Specifically, we reformulate vanilla self-attention as efficient Dual Multi-Scale Attention (DMSA) that captures multi-scale-enhanced global information along both spatial and channel dimensions with linear complexity and pixel granularity. We also introduce a context-gated linear unit in DMSA for each feature to obtain adaptive attention based on neighboring contexts. To preserve the convolutional properties, DMSAs are inserted directly between the UNet's convolutional blocks rather than replacing them. Because DMSA has multi-scale adaptive aggregation capability, the deepest convolutional block of UNet is removed to mitigate the noise interference caused by fixed convolutional kernels with large receptive fields. We further leverage efficient convolution to reduce computational redundancy. DMSA-UNet is highly competitive in terms of model complexity, with 33% fewer parameters and 15% fewer FLOPs (at 224×224 resolution) than UNet. Extensive experimental results on four different medical datasets demonstrate that DMSA-UNet outperforms other state-of-the-art approaches without any pre-trained models.
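The abstract's key claim is that attention can be computed along both the spatial and the channel dimension with linear (rather than quadratic) complexity in the number of pixels. Below is a minimal NumPy sketch of that idea only: it is not the authors' implementation, and it omits the multi-scale depthwise branches and the context-gated linear unit that define DMSA proper. The function names and shapes are illustrative assumptions; the spatial branch uses the standard "efficient attention" factorization, where normalizing K over tokens lets the (d × d) context matrix K^T V be formed first, so the cost is O(N·d²) instead of O(N²·d).

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linear_spatial_attention(q, k, v):
    # q, k, v: (N, d) flattened pixel features.
    # Normalizing K over the N tokens allows K^T V (a d x d context
    # matrix) to be computed first -> O(N * d^2), linear in N.
    q = softmax(q, axis=1)          # each query normalized over channels
    k = softmax(k, axis=0)          # keys normalized over tokens
    context = k.T @ v               # (d, d)
    return q @ context              # (N, d)

def channel_attention(q, k, v):
    # Attention among the d channels: the affinity matrix is (d, d),
    # so this branch is also linear in the number of pixels N.
    attn = softmax(q.T @ k, axis=1) # (d, d) channel-to-channel weights
    return v @ attn.T               # (N, d)

rng = np.random.default_rng(0)
N, d = 196, 32                      # e.g. 14x14 spatial positions, 32 channels
q, k, v = (rng.standard_normal((N, d)) for _ in range(3))

# "dual" aggregation: sum the spatial-wise and channel-wise branches
out = linear_spatial_attention(q, k, v) + channel_attention(q, k, v)
print(out.shape)                    # (196, 32)
```

In the paper the two branches additionally mix information at several kernel scales before attention and gate each feature by its neighborhood context; the sketch above captures only the linear-complexity dual-dimension structure the abstract describes.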