Keywords: salient features, image fusion, Transformer, cross-attention, computer vision, artificial intelligence, information fusion, pattern recognition, pixel, computer science
Authors
Lihua Jian,Songlei Xiong,Han Yan,Xiaoguang Niu,Shaowu Wu,Di Zhang
Source
Journal: Cornell University - arXiv
Date: 2024-01-01
Citations: 1
Identifier
DOI:10.48550/arxiv.2401.11675
Abstract
The salient information of an infrared image and the abundant texture of a visible image can be fused to obtain a comprehensive image. Current fusion methods based on Transformer techniques for infrared and visible (IV) images have exhibited promising performance. However, the attention mechanism of previous Transformer-based methods tends to extract common information from the source images without considering their discrepancy information, which limits fusion performance. In this paper, by reevaluating the cross-attention mechanism, we propose an alternate Transformer fusion network (ATFuse) to fuse IV images. Our ATFuse consists of one discrepancy information injection module (DIIM) and two alternate common information injection modules (ACIIM). The DIIM is designed by modifying the vanilla cross-attention mechanism, which promotes the extraction of discrepancy information from the source images. Meanwhile, the ACIIM is devised by alternately applying the vanilla cross-attention mechanism, which fully mines common information and integrates long-range dependencies. Moreover, the training of ATFuse is facilitated by a proposed segmented pixel loss function, which provides a good trade-off between texture detail and salient structure preservation. Qualitative and quantitative results on public datasets indicate that our ATFuse is effective and superior to other state-of-the-art methods.
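As background for the mechanism the abstract builds on, the following is a minimal sketch of vanilla cross-attention between two modalities, where queries come from one image's token features and keys/values from the other. This is a generic illustration, not the paper's DIIM/ACIIM design; the function name `cross_attention`, the toy shapes, and the single-head formulation are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Vanilla single-head cross-attention.

    q_feats:  (N_q, d)  token features of one modality (queries)
    kv_feats: (N_kv, d) token features of the other modality (keys/values)
    Returns:  (N_q, d)  query tokens re-expressed from the other modality.
    """
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)  # (N_q, N_kv) similarity
    attn = softmax(scores, axis=-1)             # rows sum to 1
    return attn @ kv_feats                      # weighted sum of values

# Toy infrared / visible token features (4 tokens, 8-dim each).
rng = np.random.default_rng(0)
ir_tokens = rng.standard_normal((4, 8))
vis_tokens = rng.standard_normal((4, 8))

# Infrared queries attend to visible tokens (and vice versa in a full model).
fused = cross_attention(ir_tokens, vis_tokens)
print(fused.shape)  # (4, 8)
```

In this formulation, the output mixes only information shared across modalities, which is the limitation the abstract attributes to prior methods; the paper's DIIM modifies this mechanism to also inject discrepancy information, though those modifications are not specified in the abstract.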