Keywords
Encoder; Computer science; Artificial intelligence; Pattern recognition; Fusion; Feature extraction; Image fusion; Computer vision
Authors
Han Xu, Meiqi Gong, Xin Tian, Jun Huang, Jiayi Ma
Identifier
DOI:10.1016/j.cviu.2022.103407
Abstract
In this paper, we propose a novel method for visible and infrared image fusion by decomposing feature information, termed CUFD. It adopts two pairs of encoder–decoder networks to implement feature-map extraction and decomposition, respectively. On the one hand, the shallow features of an image contain abundant detail, while the deep features focus more on thermal targets. Thus, we use one encoder–decoder network to extract both shallow and deep features. Unlike existing methods, both the shallow and the deep features are used for fusion and reconstruction, with different emphases. On the other hand, the infrared and visible features of the same layer exhibit both similarities and differences. Therefore, we train the other encoder–decoder network to decompose the feature maps into common and unique information based on these similarities and differences. We then apply different fusion rules to the common and unique parts as required. This operation better retains the significant feature information in the fusion results. Qualitative and quantitative experiments on the publicly available TNO and RoadScene datasets demonstrate the superiority of our CUFD over the state-of-the-art.
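The two-stage pipeline described in the abstract can be summarized in code. Below is a minimal PyTorch sketch, not the authors' implementation: the layer shapes, the two-head decomposer design, and the add/max fusion rules are illustrative assumptions.

```python
# Illustrative sketch of a CUFD-style pipeline: one encoder-decoder extracts
# shallow and deep features; a second decomposes paired IR/VIS feature maps
# into common and unique parts before fusion. All shapes and rules assumed.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """First network: extracts shallow and deep feature maps (hypothetical depths)."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.shallow = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.deep = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(base * 2, in_ch, 3, padding=1)

    def forward(self, x):
        f_shallow = self.shallow(x)    # abundant low-level detail
        f_deep = self.deep(f_shallow)  # emphasizes salient (thermal) targets
        recon = self.decoder(f_deep)   # reconstruction target for training
        return f_shallow, f_deep, recon

class FeatureDecomposer(nn.Module):
    """Second network: splits paired IR/VIS features of one layer into
    common and unique information (assumed two-head design)."""
    def __init__(self, ch):
        super().__init__()
        self.common = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.unique = nn.Sequential(nn.Conv2d(2 * ch, 2 * ch, 3, padding=1), nn.ReLU())

    def forward(self, f_ir, f_vis):
        pair = torch.cat([f_ir, f_vis], dim=1)
        f_common = self.common(pair)                             # shared scene structure
        f_uni_ir, f_uni_vis = self.unique(pair).chunk(2, dim=1)  # modality-specific parts
        return f_common, f_uni_ir, f_uni_vis

def fuse(f_common, f_uni_ir, f_uni_vis):
    # Assumed fusion rule: keep the common part, max-select the unique parts.
    return f_common + torch.maximum(f_uni_ir, f_uni_vis)

# Usage on dummy single-channel inputs:
ir, vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
extractor, decomposer = EncoderDecoder(), FeatureDecomposer(ch=16)
s_ir, d_ir, _ = extractor(ir)
s_vis, d_vis, _ = extractor(vis)
common, u_ir, u_vis = decomposer(s_ir, s_vis)
fused_shallow = fuse(common, u_ir, u_vis)  # (1, 16, 128, 128)
```

In the paper's scheme, shallow and deep features would each be decomposed and fused with different emphases before a final reconstruction; the rules above are stand-ins for the flexible, per-component rules the abstract describes.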