Modality (human-computer interaction)
Computer Science
Artificial Intelligence
Information Fusion
Fusion
Natural Language Processing
Computer Vision
Information Retrieval
Linguistics
Philosophy
Authors
Xiaowen Zhang, Aiping Liu, Gang Yang, Yu Liu, Xun Chen
Identifier
DOI:10.1016/j.inffus.2024.102560
Abstract
Multi-modal medical image fusion aims to integrate distinct imaging modalities into more comprehensive and precise medical images, which benefits subsequent image analysis tasks. However, despite substantial advances, prevailing state-of-the-art fusion methods do not explicitly address the efficient handling of complementary information between modalities. Moreover, most current multi-modal medical image fusion methods struggle to integrate with practical tasks because they lack the guidance of semantic information, which hinders the generation of high-quality images for accurate identification of lesion areas. To address these challenges, this paper introduces SIMFusion, a novel semantic information-guided modality-specific fusion network for multi-modal magnetic resonance (MR) images. Specifically, we propose a decomposition branch that captures common and modality-specific features from MR images of different modalities and reduces information redundancy through a correlated mutual information loss. We then obtain semantic characteristics via a semantic branch built on a pre-trained segmentation network and, finally, achieve an adaptive balance between the two sets of features through a specialized fusion strategy. Extensive experiments on the BraTS2019 and ISLES2022 datasets demonstrate the superiority of SIMFusion over existing techniques, with an 8.1% improvement in MI and a 38.7% improvement in VIFF on T2-T1ce image pairs, highlighting the proposed method as a promising solution for MR image fusion in practical applications.
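To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of its overall structure: a decomposition branch that splits each modality into common and modality-specific features, a stand-in for the semantic branch (in the paper, a pre-trained segmentation network), and a simple cosine-correlation penalty used here as a crude proxy for the correlated mutual information loss. All module names, channel sizes, and the loss formulation are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Hypothetical basic encoder block (not specified in the abstract).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DecompositionBranch(nn.Module):
    # Splits each modality into modality-common and modality-specific features.
    def __init__(self, ch=32):
        super().__init__()
        self.shared = conv_block(1, ch)   # weight-shared encoder for common features
        self.spec_a = conv_block(1, ch)   # specific encoder for modality A (e.g. T2)
        self.spec_b = conv_block(1, ch)   # specific encoder for modality B (e.g. T1ce)

    def forward(self, xa, xb):
        common = 0.5 * (self.shared(xa) + self.shared(xb))
        return common, self.spec_a(xa), self.spec_b(xb)

class SIMFusionSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.decomp = DecompositionBranch(ch)
        # Stand-in for the semantic branch; the paper uses a pre-trained
        # (and presumably frozen) segmentation network here.
        self.semantic = conv_block(2, ch)
        # Fusion head that balances the decomposed and semantic features.
        self.fuse = nn.Sequential(conv_block(4 * ch, ch), nn.Conv2d(ch, 1, kernel_size=1))

    def forward(self, xa, xb):
        common, sa, sb = self.decomp(xa, xb)
        sem = self.semantic(torch.cat([xa, xb], dim=1))
        fused = self.fuse(torch.cat([common, sa, sb, sem], dim=1))
        return torch.sigmoid(fused), sa, sb

def redundancy_loss(sa, sb):
    # Crude proxy for the correlated mutual information loss: penalize
    # correlation between the two modality-specific feature maps so that
    # they carry complementary rather than redundant information.
    sa = sa.flatten(1) - sa.flatten(1).mean(dim=1, keepdim=True)
    sb = sb.flatten(1) - sb.flatten(1).mean(dim=1, keepdim=True)
    return F.cosine_similarity(sa, sb, dim=1).abs().mean()

# Usage on dummy single-channel MR slices:
model = SIMFusionSketch()
xa, xb = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
fused, sa, sb = model(xa, xb)        # fused: (2, 1, 128, 128)
loss = redundancy_loss(sa, sb)       # scalar decorrelation penalty

The decorrelation penalty above only gestures at the idea; an actual mutual information objective would require a density or critic-based estimator, which the abstract does not detail.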