Residual
Artificial intelligence
Computer science
Pattern recognition (psychology)
Convolutional neural network
Feature extraction
Fusion
Transformer
Deep learning
Feature learning
Algorithm
Engineering
Linguistics
Electrical engineering
Philosophy
Voltage
Authors
Zhishe Wang, Yanlin Chen, Wenyu Shao, Hui Li, Lei Zhang
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 71: 1-12
Citations: 64
Identifiers
DOI: 10.1109/tim.2022.3191664
Abstract
Existing deep learning fusion methods mainly concentrate on convolutional neural networks, and few attempts have been made with transformers. Meanwhile, the convolutional operation is a content-independent interaction between the image and the convolution kernel, which may lose some important context and further limit fusion performance. To this end, we present a simple and strong fusion baseline for infrared and visible images, namely the Residual Swin Transformer Fusion Network, termed SwinFuse. Our SwinFuse includes three parts: global feature extraction, a fusion layer, and feature reconstruction. In particular, we build a fully attentional feature-encoding backbone to model long-range dependencies; it is a pure transformer network and has a stronger representation ability than convolutional neural networks. Moreover, we design a novel feature fusion strategy based on the L1-norm for sequence matrices, measuring the corresponding activity levels along the row and column vector dimensions, which retains competitive infrared brightness and distinct visible details. Finally, we evaluate SwinFuse against nine state-of-the-art traditional and deep learning methods on three different datasets through subjective observations and objective comparisons, and the experimental results show that the proposed SwinFuse achieves impressive fusion performance with strong generalization ability and competitive computational efficiency. The code will be available at https://github.com/Zhishe-Wang/SwinFuse.
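The L1-norm fusion strategy described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: it assumes each encoder emits an (N, C) sequence matrix of patch-token features, measures each token's activity level as the L1-norm of its row (the paper additionally measures along the column dimension), and fuses the two sources with softmax-style weights derived from those activity levels. The function name `l1_fusion` and the epsilon constant are illustrative choices, not from the paper.

```python
import numpy as np

def l1_fusion(phi_ir: np.ndarray, phi_vis: np.ndarray) -> np.ndarray:
    """Fuse two (N, C) feature sequence matrices by L1-norm activity.

    phi_ir / phi_vis: token features from the infrared and visible
    branches. Each token's activity level is the L1-norm of its
    C-dimensional row vector; per-token fusion weights are the
    normalized activity levels (hypothetical simplification of the
    paper's row- and column-wise measurement).
    """
    # Row-wise L1-norm -> one activity score per token, shape (N, 1)
    a_ir = np.abs(phi_ir).sum(axis=1, keepdims=True)
    a_vis = np.abs(phi_vis).sum(axis=1, keepdims=True)

    # Normalize into fusion weights (eps guards against all-zero rows)
    eps = 1e-8
    w_ir = a_ir / (a_ir + a_vis + eps)
    w_vis = a_vis / (a_ir + a_vis + eps)

    # Weighted combination of the two sequence matrices
    return w_ir * phi_ir + w_vis * phi_vis
```

Tokens with higher L1 activity (e.g. bright infrared targets or high-contrast visible details) dominate the fused representation, which is the stated goal of retaining infrared brightness alongside visible detail.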