Journal: Journal of Electronic Imaging [SPIE - International Society for Optical Engineering] | Date: 2023-11-20 | Volume/Issue: 32 (06) | Citations: 1
Identifier
DOI: 10.1117/1.jei.32.6.063014
Abstract
Existing methods for fusing infrared and visible images prioritize fusion quality at the expense of model size and tend to be biased toward the infrared image during fusion, so the fused result can lack the texture detail of the visible image. We therefore design a new feature gradient attention block to extract image texture detail: the module extracts gradient information from the original image alongside the features, then uses depthwise separable convolution to refine and enhance the edge-rich information. To preserve the original image information, we also use short links (skip connections) that reuse earlier features. Because the important features are strengthened during feature extraction, we design an adaptive-weight energy attention network based on an energy fusion strategy for the fusion stage, which further preserves the thermal radiation regions of the infrared image and the spatial details of the visible image. The proposed method is evaluated on the public visible-infrared paired dataset for low-light vision and the TNO dataset, where six objective evaluation metrics show that our model outperforms existing fusion algorithms. In addition, we verify the effectiveness of the proposed method for high-level vision tasks with object detection and semantic segmentation models.
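The abstract describes a block that extracts gradient information alongside features, refines it with a depthwise separable convolution, and uses a short link to retain the original information. The sketch below is a minimal PyTorch illustration of that general idea, assuming fixed Sobel kernels for the gradient extraction and illustrative layer names and channel counts; it is not the authors' published implementation.

```python
# Minimal sketch (assumed PyTorch, Sobel-based gradient extraction) of a
# feature-gradient block: gradients are computed per channel, enhanced by a
# depthwise separable convolution, and re-combined with the input via a
# short (skip) link. Names and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


class FeatureGradientBlock(nn.Module):
    """Extracts per-channel gradient magnitude, enhances it with a depthwise
    separable convolution, and adds a short link back to the input features."""

    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # One fixed (non-trainable) Sobel pair per channel, stored as buffers.
        self.register_buffer("kx", sobel_x.expand(channels, 1, 3, 3).clone())
        self.register_buffer("ky", sobel_y.expand(channels, 1, 3, 3).clone())
        self.enhance = DepthwiseSeparableConv(channels, channels)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gx = F.conv2d(x, self.kx, padding=1, groups=x.shape[1])
        gy = F.conv2d(x, self.ky, padding=1, groups=x.shape[1])
        grad = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)   # gradient magnitude
        return self.act(self.enhance(grad)) + x        # short link to input


if __name__ == "__main__":
    feats = torch.randn(1, 16, 64, 64)                  # dummy feature map
    print(FeatureGradientBlock(16)(feats).shape)        # torch.Size([1, 16, 64, 64])
```

The skip connection mirrors the abstract's "short links" that reference previous features; the depthwise separable convolution keeps the parameter count small, which is consistent with the paper's stated concern about model size.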