Computer science
Artificial intelligence
Transformer
Encoder
Image fusion
Fusion
Convolutional neural network
Pattern recognition
Image (mathematics)
Computer vision
Engineering
Authors
Vibashan VS,Jeya Maria Jose Valanarasu,Poojan Oza,Vishal M. Patel
Identifier
DOI:10.1109/icip46576.2022.9897280
Abstract
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information. In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion. Specifically, CNN-based methods perform image fusion by fusing local features. However, they do not consider the long-range dependencies that are present in the image. Transformer-based models are designed to overcome this by modelling long-range dependencies with the help of the self-attention mechanism. This motivates us to propose a novel Image Fusion Transformer (IFT), in which we develop a transformer-based multi-scale fusion strategy that attends to both local and long-range information (or global context). The proposed method follows a two-stage training approach. In the first stage, we train an auto-encoder to extract deep features at multiple scales. In the second stage, multi-scale features are fused using a Spatio-Transformer (ST) fusion strategy. The ST fusion blocks comprise a CNN branch and a transformer branch, which capture local and long-range features, respectively. Extensive experiments on multiple benchmark datasets show that the proposed method performs better than many competitive fusion algorithms. Furthermore, we show the effectiveness of the proposed ST fusion strategy with an ablation analysis.
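The abstract describes an ST fusion block with two parallel branches: a CNN branch for local features and a transformer branch for long-range dependencies. As a rough illustration of that idea (not the authors' exact architecture — the class name, channel counts, and the element-wise fusion of the two sensors' features are all assumptions), such a block could be sketched in PyTorch as:

```python
import torch
import torch.nn as nn

class STFusionBlock(nn.Module):
    """Hypothetical sketch of a Spatio-Transformer (ST) fusion block:
    a CNN branch captures local features while a transformer branch
    models long-range dependencies over all spatial positions."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: a small convolutional stack over the feature map.
        self.cnn_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Global branch: one transformer encoder layer applied to the
        # flattened spatial positions, so self-attention spans the image.
        self.transformer_branch = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=2 * channels, batch_first=True,
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Element-wise combination of the two sensors' encoder features
        # (an assumed, simple fusion of the multi-sensor inputs).
        x = feat_a + feat_b                         # (B, C, H, W)
        local = self.cnn_branch(x)                  # local features
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        global_feats = self.transformer_branch(tokens)
        global_feats = global_feats.transpose(1, 2).reshape(b, c, h, w)
        # Merge local and long-range responses.
        return local + global_feats

# Usage: fuse two 64-channel multi-scale features from an auto-encoder.
block = STFusionBlock(channels=64)
fused = block(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
print(fused.shape)  # torch.Size([2, 64, 16, 16])
```

The output keeps the input's spatial resolution, so blocks like this can be applied independently at each scale of the encoder, matching the multi-scale fusion strategy the abstract describes.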