Keywords: Computer science, Artificial intelligence, Interpretability, Encoder, Pattern recognition (psychology), Image fusion, Merge (version control), Fusion, Classifier (UML), Deep learning, Visualization, Fusion rule, Image (mathematics), Computer vision, Information retrieval, Philosophy, Operating systems, Linguistics
Authors
Linfeng Tang,Ziang Chen,Jun Huang,Jiayi Ma
Identifier
DOI: 10.1109/tmm.2023.3326296
Abstract
Image fusion aims to integrate the complementary information of source images and synthesize a single fused image. Existing image fusion algorithms apply hand-crafted fusion rules to merge deep features, which causes information loss and limits fusion performance, owing to the uninterpretability of deep learning. To overcome these shortcomings, we propose a learnable fusion rule for infrared and visible image fusion based on class activation mapping. Our fusion rule selectively preserves meaningful information and reduces distortion. More specifically, we first train an encoder-decoder network and an auxiliary classifier based on the shared encoder. Then, the class activation weights, which indicate the importance of each channel, are extracted from the auxiliary classifier. Finally, the deep features extracted by the encoder are adaptively fused according to the class activation weights, and the fused image is reconstructed from the fused features via the pre-trained decoder. Note that our learnable fusion rule automatically measures the importance of each deep feature without human intervention. Moreover, it fully preserves the significant features of source images, such as salient targets and texture details. Extensive experiments demonstrate our superiority over state-of-the-art algorithms. Visualization of the feature maps and their corresponding weights reveals the high interpretability of our method.
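The core of the method described above is the fusion step: channel-wise class activation weights act as importance scores for the deep features of each modality. A minimal NumPy sketch of that idea is given below; the function name, the softmax normalization across the two modalities, and the array shapes are assumptions for illustration (the paper obtains the weights from a trained auxiliary classifier, not from this snippet).

```python
import numpy as np

def cam_weighted_fusion(feat_ir, feat_vis, w_ir, w_vis):
    """Fuse deep features of two source images using class activation
    weights as channel-wise importance scores.

    feat_ir, feat_vis: (C, H, W) deep features from the shared encoder.
    w_ir, w_vis: (C,) class activation weights for each modality.
    The softmax normalization here is an illustrative assumption: it
    makes the two modalities' contributions sum to one per channel.
    """
    w = np.stack([w_ir, w_vis])                        # (2, C)
    w = np.exp(w - w.max(axis=0, keepdims=True))       # numerically stable
    w = w / w.sum(axis=0, keepdims=True)               # softmax over modalities
    # Broadcast channel weights over spatial dimensions and blend.
    return (w[0, :, None, None] * feat_ir
            + w[1, :, None, None] * feat_vis)          # (C, H, W)
```

In the full pipeline, the fused feature map produced this way would be passed to the pre-trained decoder to reconstruct the fused image; no hand-crafted rule (e.g. max- or mean-selection) is needed because the weights are learned.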