Infrared
Image fusion
Computer science
Salience
Computer vision
Artificial intelligence
Fusion
Brightness
Image (mathematics)
Pattern recognition (psychology)
Optics
Physics
Linguistics
Philosophy
Authors
Qiao Li Yang,Yu Zhang,Zijing Zhao,Jian Zhang,Shunli Zhang
Source
Journal: IEEE Signal Processing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/pages: 31: 1374-1378
Cited by: 2
Identifier
DOI:10.1109/lsp.2024.3399119
Abstract
Infrared and visible image fusion (IVIF) aims to create fused images that encompass the comprehensive features of both input images, thereby facilitating downstream vision tasks. However, existing methods often overlook illumination conditions in low-light environments, resulting in fused images where targets lack prominence. To address these shortcomings, we introduce the Illumination-Aware Infrared and Visible Image Fusion Network, abbreviated as IAIFNet. Within our framework, an illumination enhancement network first estimates the incident illumination maps of the input images, based on which the textural details of input images under low-light conditions are specifically enhanced. Subsequently, an image fusion network adeptly merges the salient features of the illumination-enhanced infrared and visible images to produce a fused image of superior visual quality. Our network incorporates a Salient Target Aware Module (STAM) and an Adaptive Differential Fusion Module (ADFM) to enhance gradient and contrast, respectively, with sensitivity to brightness. Extensive experimental results validate the superiority of our method over seven state-of-the-art approaches for fusing infrared and visible images on the public LLVIP dataset. Additionally, the lightweight design of our framework enables highly efficient fusion of infrared and visible images. Finally, evaluation results on the downstream multi-object detection task demonstrate the significant performance boost our method provides for detecting objects in low-light environments.
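The pipeline described in the abstract (estimate an illumination map, enhance the low-light visible image, then fuse with saliency-aware weights) can be sketched in NumPy. This is a minimal illustrative stand-in, not the paper's actual learned networks: `estimate_illumination`, `enhance`, and `fuse` are hypothetical helpers using a box-filter illumination estimate, Retinex-style gamma correction, and a brightness-ratio weighting in place of IAIFNet's trained modules.

```python
import numpy as np

def estimate_illumination(img, ksize=15):
    """Rough illumination map: local mean brightness via a box filter.
    Stand-in for the paper's learned illumination enhancement network."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return np.clip(out, 1e-3, 1.0)  # avoid division by zero

def enhance(img, illum, gamma=0.6):
    """Retinex-style enhancement: divide out illumination, compress range."""
    return np.clip((img / illum) ** gamma, 0.0, 1.0)

def fuse(ir, vis):
    """Saliency-weighted fusion: pixels where the infrared image is
    brighter (salient warm targets) draw proportionally more weight."""
    w = ir / (ir + vis + 1e-6)
    return w * ir + (1.0 - w) * vis

rng = np.random.default_rng(0)
ir = rng.random((32, 32))               # simulated infrared image in [0, 1]
vis = 0.2 * rng.random((32, 32))        # simulated low-light visible image
vis_enh = enhance(vis, estimate_illumination(vis))
fused = fuse(ir, vis_enh)
```

Because the fusion is a convex combination of two images in [0, 1], the fused result stays in a valid intensity range; the real IAIFNet instead learns the enhancement and the fusion weights end to end.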