Concepts
Image fusion
Feature (linguistics)
Fusion
Artificial intelligence
Image (mathematics)
Materials science
Computer science
Computer vision
Pattern recognition (psychology)
Philosophy
Linguistics
Authors
Qingyu Mao, Wenzhe Zhai, Xiang Lei, Zenghui Wang, Yongsheng Liang
Source
Journal: Electronics (MDPI AG)
Date: 2024-09-03
Volume/Issue: 13 (17): 3491
Cited by: 1
Identifier
DOI: 10.3390/electronics13173491
Abstract
The fusion of multimodal medical images, particularly CT and MRI, is driven by the need to enhance the diagnostic process by providing clinicians with a single, comprehensive image that encapsulates all necessary details. Existing fusion methods often exhibit a bias towards features from one of the source images, making it challenging to simultaneously preserve both structural information and textural details. Designing an effective fusion method that can preserve more discriminative information is therefore crucial. In this work, we propose a Coupled Feature-Learning GAN (CFGAN) to fuse multimodal medical images into a single informative image. The proposed method establishes an adversarial game between the discriminators and a pair of coupled generators. First, the coupled generators are trained to generate two real-like fused images, which are then used to deceive the two coupled discriminators. Subsequently, the two discriminators are designed to minimize the structural distance, ensuring that the abundant information in the original source images is well preserved in the fused image. We further make the generators robust across scales by constructing a discriminative feature extraction (DFE) block with different dilation rates. Moreover, we introduce a cross-dimension interaction attention (CIA) block to refine the feature representations. Qualitative and quantitative experiments on common benchmarks demonstrate the competitive performance of CFGAN compared to other state-of-the-art methods.