Artificial intelligence
Image fusion
Fusion
Computer science
Encoder
Computer vision
Pattern recognition
Feature
Image
Fusion rule
Scale
Decomposition
Authors
Guanzheng Cheng, Lizuo Jin, Lin Chai
Identifier
DOI:10.1109/ccdc58219.2023.10326978
Abstract
The fusion of infrared and visible images is an active topic in image processing, aiming to preserve the prominent targets of infrared images and the clear background texture of visible images. This paper proposes a novel auto-encoder framework for infrared and visible image fusion based on dual-scale decomposition and a learnable attention fusion strategy. The core idea is that the encoder decomposes the image into low-level multi-scale features, deep-level difference features, and common features. A two-stage training strategy is adopted: in the first stage, the auto-encoder network is trained to decompose images, extract features, and reconstruct them; in the second stage, the learnable attention-based fusion network is trained with the proposed loss function, which allows it to learn an appropriate fusion strategy for each level of feature layers. Experimental results show that our fusion framework outperforms state-of-the-art methods in both subjective and objective evaluation, achieving better values on 6 of 8 common quality metrics.
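To make the overall pipeline concrete, the sketch below shows a minimal PyTorch skeleton of an auto-encoder fusion framework with a learnable attention fusion layer. This is not the authors' implementation: the module names (Encoder, Decoder, AttentionFusion), channel widths, and layer choices are hypothetical, and the paper's dual-scale decomposition into low-level multi-scale, deep difference, and common features, as well as its two-stage losses, are only hinted at in the comments.

```python
# Minimal sketch of an auto-encoder fusion pipeline with learnable attention
# fusion (assumed structure, not the paper's code). Requires PyTorch.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy encoder: splits an image into 'common' and modality-specific features."""
    def __init__(self, ch=16):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.common = nn.Conv2d(ch, ch, 3, padding=1)   # common features
        self.detail = nn.Conv2d(ch, ch, 3, padding=1)   # difference/detail features

    def forward(self, x):
        h = self.shared(x)
        return self.common(h), self.detail(h)

class Decoder(nn.Module):
    """Toy decoder: reconstructs an image from the concatenated feature maps."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, common, detail):
        return self.net(torch.cat([common, detail], dim=1))

class AttentionFusion(nn.Module):
    """Learnable fusion: predicts per-pixel weights for the two source feature maps."""
    def __init__(self, ch=16):
        super().__init__()
        self.weight = nn.Sequential(nn.Conv2d(2 * ch, 2, 3, padding=1),
                                    nn.Softmax(dim=1))

    def forward(self, f_ir, f_vis):
        w = self.weight(torch.cat([f_ir, f_vis], dim=1))
        return w[:, :1] * f_ir + w[:, 1:] * f_vis

# Stage 1 (sketch): train Encoder/Decoder to reconstruct each modality.
# Stage 2 (sketch): freeze them and train only the fusion modules with a fusion loss.
enc, dec = Encoder(), Decoder()
fuse_common, fuse_detail = AttentionFusion(), AttentionFusion()

ir, vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
c_ir, d_ir = enc(ir)
c_vis, d_vis = enc(vis)
fused = dec(fuse_common(c_ir, c_vis), fuse_detail(d_ir, d_vis))
print(fused.shape)  # torch.Size([1, 1, 128, 128])
```

The two-stage split mirrors the abstract: reconstruction is learned first so the encoder's decomposition is stable, and only then are the per-level fusion modules trained, which lets each feature level acquire its own fusion behavior.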