Mode
Computer science
Artificial intelligence
Deep learning
Fusion
Machine learning
Image fusion
Multimodality
Sensor fusion
Information fusion
Schema therapy
Multimodal learning
Modality (human-computer interaction)
Image (mathematics)
Medicine
Linguistics
Philosophy
Surgery
Social science
Sociology
World Wide Web
Authors
Yihao Li, Mostafa El Habib Daho, Pierre-Henri Conze, Rachid Zeghlache, Hugo Le Boité, Ramin Tadayoni, Béatrice Cochener, Mathieu Lamard, Gwénolé Quellec
Identifier
DOI: 10.1016/j.compbiomed.2024.108635
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, the handling of incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
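To make the three fusion schemes named in the abstract concrete, below is a minimal PyTorch sketch for a hypothetical two-modality image classifier. It is not taken from the paper; all module names, feature dimensions, and the averaging rule used for output fusion are illustrative assumptions. It contrasts input fusion (concatenating raw images along the channel axis), single-level intermediate fusion (concatenating features from separate encoders), and output fusion (averaging per-modality predictions); hierarchical and attention-based fusion are refinements of the intermediate scheme not shown here.

```python
# Sketch of input, intermediate, and output fusion for a two-modality
# classifier. Architectures, dimensions, and the averaging rule are
# illustrative assumptions, not the paper's exact models.
import torch
import torch.nn as nn


def make_encoder(in_channels: int, feat_dim: int = 64) -> nn.Sequential:
    """Small CNN encoder mapping an image to a feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # global average pooling
        nn.Flatten(),
        nn.Linear(32, feat_dim),
    )


class InputFusionNet(nn.Module):
    """Input fusion: concatenate raw modalities along the channel axis."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = make_encoder(in_channels=2)  # one channel per modality
        self.head = nn.Linear(64, num_classes)

    def forward(self, x_a, x_b):
        return self.head(self.encoder(torch.cat([x_a, x_b], dim=1)))


class IntermediateFusionNet(nn.Module):
    """Single-level intermediate fusion: separate encoders, fused features."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.enc_a = make_encoder(in_channels=1)
        self.enc_b = make_encoder(in_channels=1)
        self.head = nn.Linear(64 * 2, num_classes)

    def forward(self, x_a, x_b):
        feats = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=1)
        return self.head(feats)


class OutputFusionNet(nn.Module):
    """Output fusion: independent classifiers, predictions averaged."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net_a = nn.Sequential(make_encoder(1), nn.Linear(64, num_classes))
        self.net_b = nn.Sequential(make_encoder(1), nn.Linear(64, num_classes))

    def forward(self, x_a, x_b):
        return (self.net_a(x_a) + self.net_b(x_b)) / 2


if __name__ == "__main__":
    # Two single-channel modalities of equal spatial size (e.g. two MRI sequences).
    x_a = torch.randn(4, 1, 64, 64)
    x_b = torch.randn(4, 1, 64, 64)
    for net in (InputFusionNet(), IntermediateFusionNet(), OutputFusionNet()):
        print(type(net).__name__, net(x_a, x_b).shape)  # -> torch.Size([4, 2])
```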