Artificial intelligence
Computer science
Computer vision
Image fusion
Feature extraction
Feature (linguistics)
Pattern recognition (psychology)
Segmentation
Image segmentation
Fusion
Image gradient
Image texture
Image (mathematics)
Linguistics
Philosophy
Authors
Xingyue Zou, Jiqiang Tang, Luqi Yang, Zhenhang Zhu
Identifier
DOI:10.1117/1.jei.32.6.063014
Abstract
Existing methods for fusing infrared and visible images prioritize fusion quality at the expense of model size and tend to bias the fusion toward the infrared image, so the fused result can lack the texture detail of the visible image. We therefore design a new feature gradient attention block that extracts the gradient information of the original image alongside its features and then uses depthwise separable convolution to refine and enhance the edge-rich information. To preserve the original image information, we also use short links to reuse earlier features. Because the important features are strengthened in the feature extraction stage, we design an adaptive-weight energy attention network based on an energy fusion strategy for the fusion stage, further preserving the thermal radiation regions of the infrared image and the spatial details of the visible image. The proposed method is experimentally verified on the public visible-infrared paired dataset for low-light vision and the TNO dataset, and six objective evaluation metrics show that our model outperforms existing fusion algorithms. In addition, we verify the effectiveness of the proposed method for high-level vision tasks using object detection and semantic segmentation models.
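The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the two ideas it names: a gradient-aware attention block built on depthwise separable convolution with a short (residual) link, and an energy-based fusion weighting. The class and function names, the Sobel-style gradient operator, the channel sizes, and the windowed-energy weighting are illustrative assumptions, not the authors' code.

```python
# Sketch of a feature-gradient attention block and an energy-based fusion step,
# assuming PyTorch; all design choices below are hypothetical illustrations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureGradientAttentionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Fixed Sobel kernels applied per channel (groups=channels) to pull
        # gradient/edge information out of the incoming feature map.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)   # (2, 1, 3, 3)
        kernel = kernel.repeat(channels, 1, 1, 1)               # (2C, 1, 3, 3)
        self.register_buffer("sobel_kernel", kernel)
        self.channels = channels

        # Depthwise separable convolution (depthwise 3x3 + pointwise 1x1)
        # refines the gradient branch with few extra parameters.
        self.depthwise = nn.Conv2d(2 * channels, 2 * channels, 3,
                                   padding=1, groups=2 * channels, bias=False)
        self.pointwise = nn.Conv2d(2 * channels, channels, 1, bias=False)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Gradient branch: per-channel Sobel responses in x and y.
        grad = F.conv2d(feat, self.sobel_kernel, padding=1, groups=self.channels)
        grad = self.pointwise(self.depthwise(grad))
        # Attention weights emphasize edge-rich (texture-detail) regions.
        attn = torch.sigmoid(grad)
        # Short link keeps the original feature information.
        return feat + feat * attn


def energy_fusion_weights(feat_ir: torch.Tensor,
                          feat_vis: torch.Tensor,
                          window: int = 7) -> torch.Tensor:
    """One plausible reading of an energy-based fusion strategy (hypothetical):
    local energy (windowed mean of squared activations) sets per-pixel weights,
    so stronger responses from either source dominate the fused feature."""
    e_ir = F.avg_pool2d(feat_ir ** 2, window, stride=1, padding=window // 2)
    e_vis = F.avg_pool2d(feat_vis ** 2, window, stride=1, padding=window // 2)
    w_ir = e_ir / (e_ir + e_vis + 1e-8)
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis
```

Under these assumptions, `FeatureGradientAttentionBlock` would sit in the encoder to strengthen texture detail, while `energy_fusion_weights` illustrates how an energy criterion could keep high-response infrared regions and detailed visible regions in the fused features.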