Keywords
Computer science, Image fusion, Artificial intelligence, Computer vision, Merge (version control), Fusion, Image (mathematics), Information retrieval, Linguistics, Philosophy
Authors
Guangkai Sun, Mingli Dong, Mingxin Yu, Lianqing Zhu
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2025-01-01
Volume/Issue: 1-1
Identifier
DOI:10.1109/tim.2025.3527534
Abstract
The purpose of infrared and visible (Inf-Vis) image fusion is to merge images from two cameras operating at different wavelengths into a single fused image that carries more information and richer visual content than either source image alone. To better extract local key information and reduce redundant information in the image, this paper proposes a hybrid attention-based fusion algorithm for illumination-aware Inf-Vis images (HAIAFusion). Specifically, we introduce a DenseNet-201-based illumination-aware sub-network, and we propose a multi-modal differential perception fusion module built on a hybrid attention mechanism that cascades channel attention, position attention, and corner attention. Extensive simulation experiments show that our algorithm outperforms state-of-the-art (SOTA) methods: the generated fused images are richer in information, have higher contrast, and are closer to human visual perception. Furthermore, the approach better preserves the edge information of the source images. These results demonstrate the significant potential of the HAIAFusion algorithm in the domain of illumination-aware Inf-Vis image fusion, and the algorithm offers notable performance improvements for a range of related visual tasks. Our code will be available at https://github.com/sunyichen1994/HAIAFusion.git.
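The cascaded attention design described in the abstract could be sketched roughly as follows. This is a minimal NumPy illustration of the general idea (channel attention followed by spatial/position attention, then an illumination-weighted merge), not the paper's implementation: the actual channel, position, and corner attention modules in HAIAFusion are learned networks, the corner attention stage is omitted here, and all function names and the `illum_weight` parameter are hypothetical.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Gate each channel by a sigmoid of its global average.
    gate = _sigmoid(x.mean(axis=(1, 2)))           # (C,)
    return x * gate[:, None, None]

def position_attention(x):
    # Gate each spatial location by a sigmoid of its cross-channel mean.
    gate = _sigmoid(x.mean(axis=0))                # (H, W)
    return x * gate[None, :, :]

def hybrid_attention_fuse(inf_feat, vis_feat, illum_weight=0.5):
    # Cascade the attention stages on each modality's feature map, then
    # merge with an illumination-dependent weight (as an illumination-aware
    # sub-network might produce; here it is just a scalar placeholder).
    a = position_attention(channel_attention(inf_feat))
    b = position_attention(channel_attention(vis_feat))
    return illum_weight * a + (1.0 - illum_weight) * b

rng = np.random.default_rng(0)
inf_feat = rng.standard_normal((8, 16, 16))        # toy infrared features
vis_feat = rng.standard_normal((8, 16, 16))        # toy visible features
fused = hybrid_attention_fuse(inf_feat, vis_feat, illum_weight=0.7)
print(fused.shape)  # (8, 16, 16)
```

In this toy version the attention gates are fixed functions of the input statistics; in a trained network each gate would instead be produced by convolutional layers with learned parameters, which is what allows the fusion to adapt to illumination conditions.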