DBTrans: A Dual-Branch Vision Transformer for Multi-Modal Brain Tumor Segmentation

Computer science · Encoder · Segmentation · Transformer · Artificial intelligence · Voltage · Physics · Quantum mechanics · Operating system
Authors
Xinyi Zeng, Pinxian Zeng, Cheng Tang, Peng Wang, Binyu Yan, Yan Wang
Source
Journal: Lecture Notes in Computer Science, pp. 502–512. Cited by: 8
Identifier
DOI:10.1007/978-3-031-43901-8_48
Abstract

3D Spatially Aligned Multi-modal MRI Brain Tumor Segmentation (SAMM-BTS) is a crucial task for clinical diagnosis. While Transformer-based models have shown outstanding success in this field thanks to their ability to model global features with the self-attention mechanism, they still face two challenges. First, its high computational complexity and weakness in modeling local features make traditional self-attention ill-suited for SAMM-BTS, which requires modeling both global and local volumetric features within an acceptable computational budget. Second, existing models merely stack the spatially aligned multi-modal data along the channel dimension, with no dedicated processing of such multi-channel data in the model's internal design. To address these challenges, we propose a Transformer-based model for the SAMM-BTS task, namely DBTrans, with dual-branch architectures for both the encoder and decoder. Specifically, the encoder implements two parallel feature-extraction branches: a local branch based on Shifted Window Self-attention and a global branch based on Shuffle Window Cross-attention, capturing both local and global information with linear computational complexity. In addition, we add an extra global branch based on Shifted Window Cross-attention to the decoder, which takes the key and value matrices from the corresponding encoder block so that the segmented target can access a more complete context during up-sampling. Furthermore, the dual-branch designs in both the encoder and decoder are integrated with improved channel-attention mechanisms to fully exploit the contribution of features in different channels. Experimental results demonstrate the superiority of our DBTrans model in both qualitative and quantitative measures. Code will be released at https://github.com/Aru321/DBTrans .
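The abstract's core efficiency argument is that attention restricted to fixed-size windows costs only linear time in the number of tokens, and that shuffling tokens across windows before attending lets each window see the whole volume. The sketch below illustrates that idea on a 1D token sequence in NumPy. It is a simplified illustration, not the paper's implementation: names such as `window_attention` and `shuffle_indices` are invented here, the attention is single-head self-attention without learned projections (the paper uses cross-attention between branches), and the 3D volume is flattened to one dimension.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(tokens, window):
    """Self-attention restricted to non-overlapping windows.

    tokens: (N, C) array, N divisible by `window`. Each window of
    `window` tokens attends only to itself, so the cost is linear in N
    (N/window windows, each O(window^2 * C)).
    """
    n, c = tokens.shape
    w = tokens.reshape(n // window, window, c)       # partition into windows
    scores = w @ w.transpose(0, 2, 1) / np.sqrt(c)   # per-window QK^T (Q=K=V here)
    return (softmax(scores) @ w).reshape(n, c)       # attend within each window only

def shuffle_indices(n, window):
    """Stride-based shuffle: the first shuffled window gathers tokens
    0, window, 2*window, ..., i.e. one token from each original window,
    so every window sees tokens spread across the whole sequence."""
    return np.arange(n).reshape(n // window, window).T.reshape(-1)

def shuffle_window_attention(tokens, window):
    """Global mixing at linear cost: shuffle -> window attention -> unshuffle."""
    idx = shuffle_indices(tokens.shape[0], window)
    inv = np.argsort(idx)                            # inverse permutation
    return window_attention(tokens[idx], window)[inv]
```

Alternating the plain windowed pass (local branch) with the shuffled pass (global branch) is what lets the two parallel branches cover local and global context while keeping overall complexity linear in the number of voxels.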