DBTrans: A Dual-Branch Vision Transformer for Multi-Modal Brain Tumor Segmentation

Keywords: Computer Science · Encoder · Segmentation · Transformer · Artificial Intelligence · Voltage · Physics · Quantum Mechanics · Operating System
Authors
Xinyi Zeng, Pinxian Zeng, Cheng Tang, Peng Wang, Binyu Yan, Yan Wang
Source
Journal: Lecture Notes in Computer Science · pp. 502–512 · Cited by: 8
Identifier
DOI: 10.1007/978-3-031-43901-8_48
Abstract

3D Spatially Aligned Multi-modal MRI Brain Tumor Segmentation (SAMM-BTS) is a crucial task for clinical diagnosis. While Transformer-based models have shown outstanding success in this field thanks to their ability to model global features with the self-attention mechanism, they still face two challenges. First, owing to its high computational complexity and weakness in modeling local features, the traditional self-attention mechanism is ill-suited for SAMM-BTS tasks, which require modeling both global and local volumetric features within acceptable computational overhead. Second, existing models merely stack the spatially aligned multi-modal data along the channel dimension, without any dedicated treatment of such multi-channel data in the model's internal design. To address these challenges, we propose a Transformer-based model for the SAMM-BTS task, namely DBTrans, with dual-branch architectures for both the encoder and decoder. Specifically, the encoder implements two parallel feature-extraction branches: a local branch based on Shifted Window Self-attention and a global branch based on Shuffle Window Cross-attention, capturing both local and global information with linear computational complexity. In addition, we add an extra global branch based on Shifted Window Cross-attention to the decoder, introducing the key and value matrices from the corresponding encoder block so that the segmented target can access a more complete context during up-sampling. Furthermore, the dual-branch designs in both the encoder and decoder are integrated with improved channel-attention mechanisms to fully exploit the contribution of features in different channels. Experimental results demonstrate the superiority of DBTrans in both qualitative and quantitative measures. Code will be released at https://github.com/Aru321/DBTrans.
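To make the dual-branch idea in the abstract concrete, the sketch below is a minimal, self-contained illustration (not the authors' released code, which will appear at the GitHub link above) of an encoder block that pairs window-partitioned self-attention with a shuffle-window cross-attention branch and fuses the two through a simple channel attention. The names (`DualBranchBlock`, `shuffle_tokens`, `ChannelAttention`), the exact shuffle scheme, and the fusion details are illustrative assumptions; the paper defines the precise Shifted/Shuffle Window operations and the improved channel attention.

```python
# Illustrative sketch (assumptions, not the authors' implementation) of a
# dual-branch block: a local branch with window-partitioned self-attention
# and a global branch with shuffle-window cross-attention, fused by a
# squeeze-and-excitation style channel attention.

import torch
import torch.nn as nn


def partition_windows(x, window):
    """(B, N, C) -> (B * N/window, window, C); N must be divisible by window."""
    B, N, C = x.shape
    return x.reshape(B * (N // window), window, C)


def merge_windows(x, batch):
    """Inverse of partition_windows: (B * num_win, window, C) -> (B, N, C)."""
    BW, W, C = x.shape
    return x.reshape(batch, (BW // batch) * W, C)


def shuffle_tokens(x, window):
    """Regroup tokens so each new window gathers tokens from many original
    windows (a stand-in for the paper's Shuffle Window operation)."""
    B, N, C = x.shape
    groups = N // window
    return x.reshape(B, groups, window, C).transpose(1, 2).reshape(B, N, C)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel re-weighting (an assumption; the
    paper only states that an improved channel attention is used)."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.GELU(),
            nn.Linear(dim // reduction, dim), nn.Sigmoid())

    def forward(self, x):                       # x: (B, N, C)
        weights = self.fc(x.mean(dim=1))        # pool over tokens -> (B, C)
        return x * weights.unsqueeze(1)


class DualBranchBlock(nn.Module):
    def __init__(self, dim, heads=4, window=8):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(dim)
        # local branch: self-attention restricted to each window
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # global branch: queries from the ordinary windows, keys/values from
        # shuffled windows, so every window can look beyond its own extent
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)
        self.channel_attn = ChannelAttention(dim)

    def forward(self, x):                        # x: (B, N, C)
        B = x.size(0)
        h = self.norm(x)
        win = partition_windows(h, self.window)
        shuf = partition_windows(shuffle_tokens(h, self.window), self.window)
        local, _ = self.local_attn(win, win, win)    # windowed self-attention
        glob, _ = self.global_attn(win, shuf, shuf)  # shuffle-window cross-attention
        fused = self.fuse(torch.cat([local, glob], dim=-1))
        fused = merge_windows(fused, B)
        return x + self.channel_attn(fused)          # residual connection


if __name__ == "__main__":
    block = DualBranchBlock(dim=32, heads=4, window=8)
    tokens = torch.randn(2, 64, 32)   # e.g. 2 volumes flattened to 64 tokens
    print(block(tokens).shape)        # torch.Size([2, 64, 32])
```

In the decoder, an analogous block could take its keys and values from the matching encoder stage rather than from shuffled windows, which is what the abstract's extra Shifted Window Cross-attention branch describes; how the 3D volume is flattened into tokens and how windows are shifted across blocks are details left to the paper.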