Keywords
Mutual information, computer science, image fusion, artificial intelligence, image (mathematics), fusion, expression (computer science), pattern recognition, computer vision, data mining
Authors
Wanwan Huang,Han Zhang,Zeyuan Li,Yanbin Yin
Identifier
DOI:10.1109/bibm55620.2022.9995221
Abstract
Deep learning-based methods for medical image fusion have become a hot topic in recent years. However, they ignore the expression of the most important features in image fusion and extract only general features, which restricts the expression of unique information in the fused image. To address this restriction, we propose a novel disentangled representation network for medical image fusion with mutual information estimation, which extracts the disentangled features of medical image fusion, i.e., the shared and exclusive features between multi-modal medical images. In our method, we use a cross mutual information method to obtain the shared features of each modality pair, which enforces the fusion network to maximize the mutual information estimate for multi-modal medical images. The exclusive features are extracted with an adversarial objective, which constrains the fusion network to minimize the mutual information estimate between shared and exclusive features. These disentangled features offer interpretative advantages and allow the fused image to retain more details from the source images while improving its visual quality. Our method achieves better results than several state-of-the-art methods; both qualitative and quantitative experiments demonstrate its superiority.
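The abstract describes maximizing mutual information between modalities for shared features and minimizing it between shared and exclusive features. The paper itself uses learned (neural) mutual information estimators, which are not reproduced here; as background, the following is a minimal sketch of the classical histogram-based mutual information between two image modalities, the quantity the learned estimators approximate. The function name and bin count are illustrative choices, not from the paper.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images:
    I(A; B) = H(A) + H(B) - H(A, B).
    Both inputs are flattened, so the images must have the same
    number of pixels (e.g., two registered modalities)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint distribution p(a, b)
    px = pxy.sum(axis=1)               # marginal p(a)
    py = pxy.sum(axis=0)               # marginal p(b)

    def entropy(p):
        # Shannon entropy in nats, with the convention 0 * log 0 = 0
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return entropy(px) + entropy(py) - entropy(pxy.ravel())

# An image shares maximal information with itself; independent
# noise images share close to none.
rng = np.random.default_rng(0)
a = rng.random(10_000)
b = rng.random(10_000)
print(mutual_information(a, a) > mutual_information(a, b))  # True
```

A high value indicates strongly shared structure between the two modalities (the quantity the shared-feature branch maximizes), while a value near zero indicates near-independence (the target for the shared-vs-exclusive constraint).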