Image fusion
Artificial intelligence
Computer science
Computer vision
Fusion
Image resolution
Resolution (logic)
Image (mathematics)
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Wanxin Xiao, Yafei Zhang, Hongbin Wang, Fan Li, Hua Jin
Source
Journal: IEEE Transactions on Instrumentation and Measurement (Institute of Electrical and Electronics Engineers)
Date: 2022-01-01
Volume: 71, pages 1-15
Citations: 42
Identifier
DOI: 10.1109/tim.2022.3149101
Abstract
Recently, infrared–visible image fusion has attracted more and more attention, and numerous excellent methods in this field have emerged. However, when low-resolution images are fused, the fusion results are also of low resolution, which limits their practical use. Although some methods can simultaneously realize the fusion and super-resolution of low-resolution images, the improvement in fusion performance is limited by the lack of guidance from high-resolution fusion results. To address this issue, we propose a heterogeneous knowledge distillation network (HKDnet) with multilayer attention embedding to jointly implement the fusion and super-resolution of infrared and visible images. Specifically, the proposed method consists of a high-resolution image fusion network (teacher network) and a low-resolution image fusion and super-resolution network (student network). The teacher network fuses the high-resolution input images and guides the student network to acquire the ability to jointly perform fusion and super-resolution. To make the student network pay more attention to the texture details of the visible input image, we design a corner embedding attention mechanism that integrates channel attention, position attention, and corner attention to highlight the edges, textures, and structure of the visible image. For the input infrared image, a dual-frequency attention (DFA) is constructed by mining the relationships among interlayer features to highlight the role of the infrared image's salient targets in the fusion result. The experimental results show that, compared with existing methods, the proposed method preserves the image information of both visible and infrared modalities, achieves pleasing visual effects, and produces accurate and natural texture details. The code of the proposed method is available at https://github.com/firewaterfire/HKDnet.
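The abstract describes a teacher-student (knowledge-distillation) design: a high-resolution fusion teacher supervises a student that jointly fuses and super-resolves low-resolution inputs. The PyTorch sketch below illustrates only that training pattern; it is not the authors' HKDnet (their implementation is at the GitHub link above), and the toy networks, loss weights, and tensor shapes are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionNet(nn.Module):
    """Toy encoder-decoder that fuses an infrared and a visible image.

    With upscale > 1 it also super-resolves, standing in for the student.
    """

    def __init__(self, upscale=1):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Pixel-shuffle upsampling so the student can output at teacher resolution.
        self.upsample = (
            nn.Sequential(nn.Conv2d(32, 32 * upscale ** 2, 3, padding=1),
                          nn.PixelShuffle(upscale))
            if upscale > 1 else nn.Identity()
        )
        self.decode = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, ir, vis):
        feat = self.encode(torch.cat([ir, vis], dim=1))
        feat = self.upsample(feat)
        return self.decode(feat), feat  # fused image and intermediate features


def distill_step(teacher, student, hr_ir, hr_vis, lr_ir, lr_vis, optimizer):
    """One training step: the frozen teacher fuses the HR pair, the student
    fuses and super-resolves the LR pair and is pulled toward the teacher's
    output and features (a stand-in for the paper's distillation losses)."""
    with torch.no_grad():
        t_fused, t_feat = teacher(hr_ir, hr_vis)

    s_fused, s_feat = student(lr_ir, lr_vis)

    # Assumed output- and feature-level L1 distillation losses with a toy weight.
    loss = F.l1_loss(s_fused, t_fused) + 0.1 * F.l1_loss(s_feat, t_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    scale = 2
    teacher = FusionNet(upscale=1).eval()
    student = FusionNet(upscale=scale)
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)

    # Random stand-ins for a registered infrared-visible pair.
    hr_ir, hr_vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
    lr_ir = F.interpolate(hr_ir, scale_factor=1 / scale, mode="bicubic")
    lr_vis = F.interpolate(hr_vis, scale_factor=1 / scale, mode="bicubic")

    print("distillation loss:",
          distill_step(teacher, student, hr_ir, hr_vis, lr_ir, lr_vis, opt))
```

The corner embedding attention and dual-frequency attention modules are not reproduced in this sketch, since the abstract does not specify their internal structure.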