Image fusion
Artificial intelligence
Fusion
Computer science
Human visual system model
Computer vision
Fuse (electrical)
Image quality
Visual perception
Pattern recognition (psychology)
Image (mathematics)
Feature (linguistics)
Noise (video)
Process (computing)
Perception
Physics
Philosophy
Operating system
Biology
Quantum mechanics
Neuroscience
Linguistics
Authors
Zhiqiang Zhou, Erfang Fei, Lingjuan Miao, Rao Yang
Identifier
DOI: 10.1016/j.inffus.2022.12.022
Abstract
Infrared–visible image fusion is of great value in many applications because the two modalities carry highly complementary information. However, it is difficult for current fusion algorithms to obtain high-quality fused images. In this paper, we reveal an underlying deficiency in the current fusion framework that limits fusion quality: the visual features used in the fusion are easily affected by external physical conditions (e.g., the characteristics of different sensors and environmental illumination), which means that features from different sources are not guaranteed to be fused on a consistent basis. Inspired by biological vision, we derive a framework that transforms image intensities into the visual response space of the human visual system (HVS), within which all features are fused in the same perceptual state, eliminating the external physical factors that may influence the fusion process. The proposed framework incorporates key characteristics of the HVS that facilitate the simulation of human visual response in complex scenes, and is built on a new variant of multiscale decomposition that can accurately localize image structures of different scales during visual-response simulation and feature fusion. A bidirectional saliency aggregation is proposed to fuse the perceived contrast features within the HVS visual response space, along with an adaptive suppression of noise and intensity saturation in this space prior to fusion. The final fused image is obtained by transforming the fusion results in the human visual response space back to the physical domain. Experiments demonstrate that the proposed method significantly improves fusion quality.