Computer science
Artificial intelligence
Source code
Image fusion
Transformer
Pixel
Computer vision
Feature extraction
Convolutional neural network
Pattern recognition
Image (mathematics)
Authors
Wei Tang, Fazhi He, Yu Liu, Yansong Duan, Tongzhen Si
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-01
Volume/Issue: 33(7): 3159-3172
Citations: 48
Identifier
DOI:10.1109/tcsvt.2023.3234340
Abstract
The fusion of infrared and visible images aims to generate a composite image that simultaneously contains the thermal radiation information of an infrared image and the plentiful texture details of a visible image, enabling target detection under various weather conditions at a high spatial resolution. Previous deep fusion models were generally based on convolutional operations, resulting in a limited ability to represent long-range context information. In this paper, we propose a novel end-to-end model for infrared and visible image fusion via a dual attention Transformer, termed DATFuse. To accurately examine the significant areas of the source images, a dual attention residual module (DARM) is designed for important feature extraction. To further model long-range dependencies, a Transformer module (TRM) is devised for global complementary information preservation. Moreover, a loss function consisting of three terms, namely pixel loss, gradient loss, and structural loss, is designed to train the proposed model in an unsupervised manner. This avoids the manually designed activity-level measurements and fusion strategies of traditional image fusion methods. Extensive experiments on public datasets reveal that DATFuse outperforms other representative state-of-the-art approaches in both qualitative and quantitative assessments. The proposed model is also extended to other infrared and visible image fusion tasks without fine-tuning, and the promising results demonstrate its good generalization ability. The source code is available at https://github.com/tthinking/DATFuse.
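The abstract names a three-term unsupervised loss (pixel, gradient, structural) but does not give its closed form. The following is a minimal NumPy sketch under stated assumptions: the pixel and gradient terms pull the fused image toward the element-wise maximum of the two sources, and normalized cross-correlation stands in for the structural (SSIM-style) term. The term definitions, operators, and weights are illustrative, not the paper's exact formulation.

```python
import numpy as np

def _grad(img):
    # Finite-difference gradient magnitude (illustrative; the paper may
    # use a Sobel or Laplacian operator instead).
    gx = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    gy = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    return gx + gy

def _ncc(a, b):
    # Normalized cross-correlation, a rough stand-in for SSIM.
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).sum() / (np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + 1e-8)

def fusion_loss(fused, ir, vis, w_pix=1.0, w_grad=1.0, w_struct=1.0):
    """Sketch of an unsupervised fusion loss: pixel + gradient + structural terms."""
    # Pixel term: keep fused intensities near the brighter source pixel.
    pix = np.mean(np.abs(fused - np.maximum(ir, vis)))
    # Gradient term: preserve the strongest edges from either source.
    grad = np.mean(np.abs(_grad(fused) - np.maximum(_grad(ir), _grad(vis))))
    # Structural term: penalize low correlation with both sources.
    struct = 1.0 - 0.5 * (_ncc(fused, ir) + _ncc(fused, vis))
    return w_pix * pix + w_grad * grad + w_struct * struct
```

Under this sketch, a fused image equal to the element-wise maximum of the sources zeroes the pixel term and scores well on the other two, so it incurs a lower loss than an unrelated image, which is the behavior such unsupervised fusion losses are designed to reward.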