Artificial intelligence, Computer science, Feature extraction, Salience, Image fusion, Generator (circuit theory), Computer vision, Pattern recognition (psychology), Feature (linguistics), Fusion, Image (mathematics), Power (physics), Linguistics, Physics, Philosophy, Quantum mechanics
Authors
Shuying Huang,Zixiang Song,Yong Yang,Weiguo Wan,Xiangkai Kong
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 72: 1-14
Cited by: 5
Identifier
DOI: 10.1109/tim.2023.3282300
Abstract
Deep learning has been widely used in infrared and visible image fusion owing to its strong feature extraction and generalization capabilities. However, it is difficult to directly extract modality-specific features from images of different modalities. Therefore, according to the characteristics of infrared and visible images, this paper proposes a multi-attention generative adversarial network (MAGAN) for infrared and visible image fusion, which is composed of a multi-attention generator and two multi-attention discriminators. The multi-attention generator gradually realizes the extraction and fusion of image features by constructing two modules: a triple-path feature pre-fusion module (TFPM) and a feature emphasis fusion module (FEFM). The two multi-attention discriminators are constructed to ensure that the fused images retain the salient targets and the texture information from the source images. In MAGAN, an intensity attention and a texture attention are designed to extract the specific features of the source images, so that the fused image retains more intensity and texture information. In addition, a saliency target intensity loss is defined to ensure that the fused images obtain more accurate salient information from the infrared images. Experimental results on two public datasets show that the proposed MAGAN outperforms several state-of-the-art models in terms of both visual effects and quantitative metrics.
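To make the described architecture concrete, below is a minimal PyTorch sketch of the general pattern the abstract outlines: a generator with two modality-specific attention branches (intensity-oriented for infrared, texture-oriented for visible), two discriminators (one per source modality), and a saliency-weighted intensity loss. All module names, layer sizes, the spatial-attention form, the placeholder saliency mask, and the LSGAN-style adversarial terms are assumptions for illustration only; this is not the authors' MAGAN implementation, and the TFPM/FEFM modules are collapsed into a single fusion head.

```python
# Hedged sketch of a MAGAN-style fusion setup (assumed structure, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + LeakyReLU, the basic unit used throughout this sketch."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2))


class SpatialAttention(nn.Module):
    """Single-channel spatial gate in [0, 1]; stands in for the intensity /
    texture attention described in the abstract (assumed form)."""
    def __init__(self, ch):
        super().__init__()
        self.score = nn.Conv2d(ch, 1, 1)

    def forward(self, feat):
        return torch.sigmoid(self.score(feat)) * feat


class Generator(nn.Module):
    """Two modality-specific branches with attention, then a fused head
    (a stand-in for the TFPM + FEFM pipeline)."""
    def __init__(self, ch=32):
        super().__init__()
        self.ir_branch = nn.Sequential(conv_block(1, ch), conv_block(ch, ch))
        self.vis_branch = nn.Sequential(conv_block(1, ch), conv_block(ch, ch))
        self.ir_att = SpatialAttention(ch)    # intensity-oriented (assumed)
        self.vis_att = SpatialAttention(ch)   # texture-oriented (assumed)
        self.fuse = nn.Sequential(conv_block(2 * ch, ch),
                                  nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, ir, vis):
        f_ir = self.ir_att(self.ir_branch(ir))
        f_vis = self.vis_att(self.vis_branch(vis))
        return self.fuse(torch.cat([f_ir, f_vis], dim=1))


class Discriminator(nn.Module):
    """PatchGAN-style critic; one instance per source modality."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, ch), conv_block(ch, ch),
                                 nn.Conv2d(ch, 1, 1))

    def forward(self, x):
        return self.net(x)


def saliency_intensity_loss(fused, ir, mask):
    """Pixel intensity loss weighted by a salient-target mask (assumed form)."""
    return F.l1_loss(mask * fused, mask * ir)


if __name__ == "__main__":
    ir = torch.rand(2, 1, 64, 64)
    vis = torch.rand(2, 1, 64, 64)
    mask = (ir > 0.7).float()              # placeholder saliency mask
    g, d_ir, d_vis = Generator(), Discriminator(), Discriminator()
    fused = g(ir, vis)
    loss = (saliency_intensity_loss(fused, ir, mask)
            + F.mse_loss(d_ir(fused), torch.ones_like(d_ir(fused)))
            + F.mse_loss(d_vis(fused), torch.ones_like(d_vis(fused))))
    print(fused.shape, loss.item())
```

The two discriminators mirror the abstract's design intent: the infrared-side critic pushes the fused image toward salient intensity content, while the visible-side critic pushes it toward texture detail; the exact losses and module internals used in the paper are not reproduced here.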