Authors
Xiaoqing Luo,Yuanhao Gao,Anqi Wang,Zhancheng Zhang,Xiao‐Jun Wu
Identifier
DOI:10.1109/tmm.2021.3129354
Abstract
This paper proposes an image fusion framework based on separate representation learning, called IFSepR. Based on prior knowledge, we assume that both co-modal and multi-modal image pairs contain common and private features, and that exploiting this disentangled representation benefits image fusion, especially the design of fusion rules. Inspired by autoencoder networks and contrastive learning, a multi-branch encoder with contrastive constraints is built to learn the common and private features of paired images. In the fusion stage, a general fusion rule integrates the private features of the disentangled representation; the fused private features are then combined with the common feature and fed into the decoder, which reconstructs the fused image. We evaluate the framework on three typical image fusion tasks: multi-focus image fusion, infrared and visible image fusion, and medical image fusion. Quantitative and qualitative comparisons with five state-of-the-art image fusion methods demonstrate the advantages of the proposed model.
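The pipeline the abstract describes — disentangle each input into common and private features, fuse only the private parts, then decode — can be sketched as below. This is a minimal illustrative skeleton, not the paper's method: the `encode`/`decode` bodies are hypothetical placeholders standing in for the learned multi-branch autoencoder, and the element-wise maximum fusion rule is an assumption chosen only to make the data flow concrete.

```python
# Hypothetical sketch of a disentangled-representation fusion pipeline.
# All function bodies are placeholders; IFSepR learns these mappings with
# a multi-branch autoencoder trained under contrastive constraints.

def encode(image):
    """Stand-in encoder: split a feature vector into (common, private) halves.
    (Hypothetical: the paper learns this disentanglement, it is not a slice.)"""
    mid = len(image) // 2
    return image[:mid], image[mid:]

def fuse_private(p_a, p_b):
    """Hypothetical fusion rule: element-wise maximum over private features."""
    return [max(x, y) for x, y in zip(p_a, p_b)]

def decode(common, fused_private):
    """Stand-in decoder: recombine features into a fused 'image' vector."""
    return common + fused_private

def fuse_images(img_a, img_b):
    c_a, p_a = encode(img_a)
    c_b, p_b = encode(img_b)
    # The common feature is shared by the pair; average the two estimates.
    common = [(x + y) / 2 for x, y in zip(c_a, c_b)]
    return decode(common, fuse_private(p_a, p_b))

print(fuse_images([1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 5.0, 0.0]))
# → [1.5, 1.5, 5.0, 4.0]
```

The design point the abstract emphasizes survives even in this toy version: because common content is factored out before fusion, the fusion rule only has to arbitrate the modality-specific (private) information.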