Keywords
Computer science
Artificial intelligence
Image fusion
Convolutional neural network
Feature extraction
Pattern recognition (psychology)
Fusion
Computer vision
Fusion rule
Feature (linguistics)
Object detection
Image (mathematics)
Philosophy
Linguistics
Authors
S.H. Park, An Gia Vien, Chul Lee
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-26
Volume/Issue: 34 (2): 770-785
Citations: 18
Identifier
DOI: 10.1109/tcsvt.2023.3289170
Abstract
Image fusion techniques aim to generate more informative images by merging multiple images of different modalities with complementary information. Despite significant fusion performance improvements of recent learning-based approaches, most fusion algorithms have been developed based on convolutional neural networks (CNNs), which stack deep layers to obtain a large receptive field for feature extraction. However, important details and contexts of the source images may be lost through a series of convolution layers. In this work, we propose a cross-modal transformer-based fusion (CMTFusion) algorithm for infrared and visible image fusion that captures global interactions by faithfully extracting complementary information from source images. Specifically, we first extract the multiscale feature maps of infrared and visible images. Then, we develop cross-modal transformers (CMTs) to retain complementary information in the source images by removing redundancies in both the spatial and channel domains. To this end, we design a gated bottleneck that integrates cross-domain interaction to consider the characteristics of the source images. Finally, a fusion result is obtained by exploiting spatial-channel information in refined feature maps using a fusion block. Experimental results on multiple datasets demonstrate that the proposed algorithm provides better fusion performance than state-of-the-art infrared and visible image fusion algorithms, both quantitatively and qualitatively. Furthermore, we show that the proposed algorithm can be used to improve the performance of computer vision tasks, e.g., object detection and monocular depth estimation.
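The abstract describes a four-stage pipeline: multiscale feature extraction from each modality, cross-modal transformers (CMTs) that remove spatial- and channel-domain redundancies, a gated bottleneck that injects cross-domain interaction, and a fusion block. The sketch below shows one plausible way such a pipeline could be wired up in PyTorch. It is a minimal structural sketch under stated assumptions, not the authors' implementation: every module name (MultiscaleEncoder, CrossModalTransformer, GatedBottleneck, FusionBlock, CMTFusionSketch), the two-scale setup, and all layer widths are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleEncoder(nn.Module):
    """Extracts feature maps at two scales from a single-channel image."""
    def __init__(self, ch=32):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)   # full resolution
        f2 = self.stage2(f1)  # half resolution
        return [f1, f2]

class GatedBottleneck(nn.Module):
    """Gates one modality's features with cross-domain context from the other,
    suppressing responses that are redundant across the two modalities."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, f_self, f_other):
        g = self.gate(torch.cat([f_self, f_other], dim=1))
        return f_self * g

class CrossModalTransformer(nn.Module):
    """Cross-attention from one modality (queries) to the other (keys/values),
    followed by the gated bottleneck."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)
        self.bottleneck = GatedBottleneck(ch)

    def forward(self, f_q, f_kv):
        b, c, h, w = f_q.shape
        q = f_q.flatten(2).transpose(1, 2)    # (B, HW, C) token sequence
        kv = f_kv.flatten(2).transpose(1, 2)  # (B, HW, C) token sequence
        out, _ = self.attn(q, kv, kv)
        out = self.norm(out + q).transpose(1, 2).reshape(b, c, h, w)
        return self.bottleneck(out, f_kv)

class FusionBlock(nn.Module):
    """Merges the refined feature maps of both modalities into a fused image."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, f_ir, f_vis):
        return self.fuse(torch.cat([f_ir, f_vis], dim=1))

class CMTFusionSketch(nn.Module):
    """End-to-end sketch: encode both modalities at two scales, refine each
    scale with cross-modal attention in both directions, then fuse."""
    def __init__(self, ch=32, scales=2):
        super().__init__()
        self.enc_ir = MultiscaleEncoder(ch)
        self.enc_vis = MultiscaleEncoder(ch)
        self.cmt_ir = nn.ModuleList(CrossModalTransformer(ch) for _ in range(scales))
        self.cmt_vis = nn.ModuleList(CrossModalTransformer(ch) for _ in range(scales))
        self.fusion = FusionBlock(ch)

    def forward(self, ir, vis):
        f_ir, f_vis = self.enc_ir(ir), self.enc_vis(vis)
        r_ir = [cmt(fi, fv) for cmt, fi, fv in zip(self.cmt_ir, f_ir, f_vis)]
        r_vis = [cmt(fv, fi) for cmt, fv, fi in zip(self.cmt_vis, f_vis, f_ir)]
        # upsample the coarse scale and merge it into the fine scale
        size = ir.shape[2:]
        agg = lambda r: r[0] + F.interpolate(r[1], size=size, mode="bilinear",
                                             align_corners=False)
        return self.fusion(agg(r_ir), agg(r_vis))

if __name__ == "__main__":
    ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    print(CMTFusionSketch()(ir, vis).shape)  # torch.Size([1, 1, 64, 64])
```

Note that attending over every spatial position is quadratic in the number of pixels, so a practical implementation would restrict attention to windows or a coarser token grid; the sketch is only meant to show the data flow named in the abstract: per-scale cross-attention in both directions, gating to drop cross-modal redundancies, then spatial-channel fusion.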