Computer science
Artificial intelligence
Pattern recognition
Feature extraction
Convolutional neural network
Computer vision
Authors
Wei Tang, Fazhi He, Yu Liu
Identifier
DOI:10.1016/j.patcog.2022.109295
Abstract
Infrared and visible image fusion aims to obtain a synthetic image that simultaneously exhibits salient objects and provides abundant texture details. However, existing deep learning-based methods generally depend on convolutional operations, which have good local feature extraction ability but whose restricted receptive field limits their capability in modeling long-range dependencies. To overcome this dilemma, we propose an infrared and visible image fusion method based on Transformer and cross correlation, named TCCFusion. Specifically, we design a local feature extraction branch (LFEB) to preserve local complementary information, in which a dense-shape network is introduced to reuse information that may be lost during convolution. To avoid the limitation of the receptive field and to fully extract globally significant information, we devise a global feature extraction branch (GFEB) consisting of three Transformer blocks for long-range relationship construction. The LFEB and GFEB are arranged in parallel to retain useful local and global information more effectively. Furthermore, we design a cross correlation loss to train the proposed fusion model in an unsupervised manner, so that the fusion result retains adequate thermal radiation information from the infrared image and ample texture details from the visible image. Extensive experiments on two mainstream datasets demonstrate that TCCFusion outperforms state-of-the-art algorithms in both visual quality and quantitative assessment. Ablation experiments on the network framework and objective function confirm the effectiveness of the proposed method.
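The abstract describes a parallel two-branch design (a dense convolutional branch for local features and a Transformer branch for global context) trained without supervision via a cross correlation loss. The sketch below is a minimal PyTorch illustration of that general structure only; the module names, channel widths, token layout, and the zero-normalized form of the correlation loss are assumptions made for illustration, not the paper's actual TCCFusion implementation.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense-shape convolutions: each layer reuses all earlier feature maps,
    echoing the abstract's idea of reusing information that plain convolution
    may lose. (Hypothetical stand-in for the LFEB.)"""
    def __init__(self, in_ch, growth=16, depth=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for _ in range(depth):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class GlobalBranch(nn.Module):
    """Three Transformer encoder blocks over flattened pixel tokens to model
    long-range dependencies. (Hypothetical stand-in for the GFEB.)"""
    def __init__(self, dim=64, heads=4, depth=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        tokens = self.encoder(tokens)              # global attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class FusionSketch(nn.Module):
    """Local and global branches run in parallel; a 1x1 conv merges them."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Conv2d(2, dim, 3, padding=1)   # concat(IR, VIS) -> dim
        self.local = DenseBlock(dim)
        self.glob = GlobalBranch(dim)
        self.fuse = nn.Conv2d(self.local.out_ch + dim, 1, kernel_size=1)

    def forward(self, ir, vis):
        x = self.embed(torch.cat([ir, vis], dim=1))
        y = torch.cat([self.local(x), self.glob(x)], dim=1)
        return torch.sigmoid(self.fuse(y))

def cross_correlation_loss(fused, source, eps=1e-8):
    """1 minus the zero-normalized cross correlation between two images;
    minimizing it pulls the fused image toward the source's structure."""
    f = fused - fused.mean(dim=(2, 3), keepdim=True)
    s = source - source.mean(dim=(2, 3), keepdim=True)
    num = (f * s).mean(dim=(2, 3))
    den = torch.sqrt((f * f).mean(dim=(2, 3)) * (s * s).mean(dim=(2, 3))) + eps
    return (1.0 - num / den).mean()

# Unsupervised training signal: correlate the fused result with both inputs.
model = FusionSketch()
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused = model(ir, vis)
loss = cross_correlation_loss(fused, ir) + cross_correlation_loss(fused, vis)
```

Summing the loss over both sources, as in the last line, is one plausible way to realize the abstract's goal of keeping thermal radiation from the infrared image and texture from the visible image; the paper's actual loss weighting may differ.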