Computer science
Translation (biology)
Fusion
Feature (linguistics)
Image fusion
Artificial intelligence
Computer vision
Displacement (psychology)
Image registration
Image (mathematics)
Constraint (computer-aided design)
Pattern recognition (psychology)
Mathematics
Messenger RNA
Geometry
Philosophy
Gene
Biochemistry
Linguistics
Chemistry
Psychotherapist
Psychology
Authors
Huafeng Li,Junzhi Zhao,Jinxing Li,Zhengtao Yu,Guangming Lu
Identifier
DOI:10.1016/j.inffus.2023.02.011
Abstract
Translational displacement between source images captured by different sensors is a common phenomenon and degrades image-fusion performance. A straightforward remedy is to register the source images first; however, the large modality gap between infrared and visible images makes fully registered images very difficult to obtain. In this paper, a novel registration-free fusion method is proposed for infrared and visible images with translational displacement, which transforms the image-registration problem into feature alignment within an end-to-end framework. Specifically, we propose a cross-modulation strategy followed by dynamic feature alignment, so that the spatial correlation of shifts is adaptively measured and aligned features can be dynamically extracted. A feature refinement module based on local similarity is additionally designed, which enhances texture-related information while suppressing artifact-related information. Thanks to these strategies, our experimental results on infrared–visible images with translational displacement show dramatic improvement over state-of-the-art methods. To the best of our knowledge, this is the first work on infrared–visible image fusion without strict registration. It breaks the constraint of existing registration-based two-step strategies and provides a simple but effective approach to multi-modal image fusion. The source code will be released at https://github.com/lhf12278/RFVIF.
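The abstract describes a refinement module that weights fused features by the local similarity between the two modalities. The paper's exact module is not given here, so the following is only a minimal NumPy sketch of the general idea: per-pixel cosine similarity across channels is used as a gate, so well-aligned regions contribute strongly while misaligned (artifact-prone) regions are suppressed. The function name, shapes, and the additive fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_similarity_refine(feat_ir, feat_vis, eps=1e-8):
    """Illustrative local-similarity refinement (not the paper's exact module).

    feat_ir, feat_vis: feature maps of shape (C, H, W).
    Returns a fused map where positions with low cross-modal
    similarity (likely misalignment artifacts) are down-weighted.
    """
    # Per-pixel cosine similarity across the channel dimension.
    num = np.sum(feat_ir * feat_vis, axis=0)                              # (H, W)
    den = np.linalg.norm(feat_ir, axis=0) * np.linalg.norm(feat_vis, axis=0) + eps
    sim = num / den                                                       # in [-1, 1]

    # Map similarity to a [0, 1] gate and apply it to a naive additive fusion.
    weight = (sim + 1.0) / 2.0
    fused = feat_ir + feat_vis
    return weight[None, :, :] * fused
```

With identical inputs the gate is ~1 everywhere, so the output reduces to plain additive fusion; as the modalities decorrelate locally, those positions are attenuated toward zero.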