Modality (human–computer interaction)
Computer science
Pattern
Segmentation
Artificial intelligence
Pattern recognition (psychology)
Redundancy (engineering)
Social science
Sociology
Operating system
Authors
Junjie Shi, Li Yu, Qimin Cheng, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan
Source
Journal: IEEE Journal of Biomedical and Health Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2023-10-20
Volume/Issue: 28 (1): 379-390
Citations: 8
Identifier
DOI: 10.1109/jbhi.2023.3326151
Abstract
Brain tumor segmentation is a fundamental task, and existing approaches usually rely on multi-modality magnetic resonance imaging (MRI) images for accurate segmentation. However, the common problem of missing/incomplete modalities in clinical practice would severely degrade their segmentation performance, and existing fusion strategies for incomplete multi-modality brain tumor segmentation are far from ideal. In this work, we propose a novel framework named M²FTrans to explore and fuse cross-modality features through modality-masked fusion transformers under various incomplete multi-modality settings. Considering that vanilla self-attention is sensitive to missing tokens/inputs, both learnable fusion tokens and masked self-attention are introduced to stably build long-range dependencies across modalities while remaining flexible enough to learn from incomplete modalities. In addition, to avoid being biased toward certain dominant modalities, modality-specific features are further re-weighted through spatial weight attention and channel-wise fusion transformers for feature redundancy reduction and modality re-balancing. In this way, the fusion strategy in M²FTrans is more robust to missing modalities. Experimental results on the widely used BraTS2018, BraTS2020, and BraTS2021 datasets demonstrate the effectiveness of M²FTrans, outperforming state-of-the-art approaches by large margins under various incomplete-modality settings for brain tumor segmentation. Code is available at https://github.com/Jun-Jie-Shi/M2FTrans.
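The key idea behind masked self-attention for missing modalities can be illustrated with a small NumPy sketch. This is a toy single-head version written for this page, not the paper's actual implementation: attention logits for absent modality tokens are set to negative infinity before the softmax, so the output for the available modalities never depends on the missing ones. The function name and the use of tokens as their own queries/keys/values are illustrative simplifications.

```python
import numpy as np

def masked_self_attention(tokens, present_mask):
    """Toy single-head masked self-attention (illustrative sketch only).
    tokens: (n, d) array of per-modality feature tokens.
    present_mask: boolean (n,) array, True where the modality is available.
    For simplicity, queries/keys/values are the tokens themselves."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)       # (n, n) attention logits
    scores[:, ~present_mask] = -np.inf            # missing tokens get zero weight
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens                       # fused per-token features

# The rows of the output for available modalities are unaffected by
# whatever values the missing-modality tokens happen to contain.
```

Because masked columns receive zero attention weight, the fused features of the present modalities are identical regardless of what garbage sits in the missing slots, which is the stability property the abstract attributes to masked self-attention.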