DMSA-UNet: Dual Multi-Scale Attention makes UNet more strong for medical image segmentation

Authors
Xiang Li, Chong Fu, Qun Wang, Wenchao Zhang, Chiu-Wing Sham, Junxin Chen
Source
Journal: Knowledge-Based Systems [Elsevier BV]
Volume 299, Article 112050. Cited by: 10
Identifier
DOI: 10.1016/j.knosys.2024.112050
Abstract

Convolutional Neural Networks (CNNs), particularly UNet, have become prevalent in medical image segmentation tasks. However, CNNs inherently struggle to capture global dependencies owing to their intrinsic locality. Although Transformers have shown superior performance in modeling global dependencies, they encounter the challenges of high model complexity and dependencies on large-scale pre-trained models. Furthermore, the current attention mechanisms of Transformers only consider single-scale feature interactions, making it difficult to analyze feature correlations at different scales in the same attention layer. In this paper, we propose DMSA-UNet, which strengthens the global analysis capability and maximally preserves the local inductive bias capability while maintaining low model complexity. Specifically, we reformulate vanilla self-attention as efficient Dual Multi-Scale Attention (DMSA) that captures multi-scale-enhanced global information along both spatial and channel dimensions with linear complexity and pixel granularity. We also introduce a context-gated linear unit in DMSA for each feature to obtain adaptive attention based on neighboring contexts. To preserve the convolutional properties, DMSAs are inserted directly between the UNet's convolutional blocks rather than replacing them. Because DMSA has multi-scale adaptive aggregation capability, the deepest convolutional block of UNet is removed to mitigate the noise interference caused by fixed convolutional kernels with large receptive fields. We further leverage efficient convolution to reduce computational redundancy. DMSA-UNet is highly competitive in terms of model complexity, with 33% fewer parameters and 15% fewer FLOPs (at 224² resolution) than UNet. Extensive experimental results on four different medical datasets demonstrate that DMSA-UNet outperforms other state-of-the-art approaches without any pre-trained models.
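The abstract does not spell out how DMSA attains linear complexity, but the standard factorization behind linear-attention variants is to normalize the queries and keys separately and compute Kᵀ·V (a small d×d "global context" matrix) before multiplying by Q, so the cost is O(n·d²) in the number of pixels n rather than O(n²·d). The sketch below illustrates only this generic factorization, not the paper's actual DMSA formulation; all function names are hypothetical, and plain Python lists stand in for tensors to keep the example self-contained.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matmul(A, B):
    # (n x k) @ (k x m) -> (n x m), on nested lists.
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def linear_attention(Q, K, V):
    """Linear-complexity global attention (generic sketch, not DMSA itself).

    Q, K, V: n x d matrices (n pixels, d channels). Instead of forming the
    n x n map softmax(Q K^T), normalize Q row-wise and K column-wise, then
    aggregate K^T V first: total cost O(n * d^2) instead of O(n^2 * d).
    """
    Qn = [softmax(row) for row in Q]                     # rows sum to 1
    Kn = transpose([softmax(col) for col in transpose(K)])  # cols sum to 1
    context = matmul(transpose(Kn), V)                   # d x d global context
    return matmul(Qn, context)                           # n x d output
```

Because both normalizations produce convex weights, each output entry is a convex combination of the corresponding V column, i.e. every pixel attends to a compact global summary rather than to all n pixels directly.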