Keywords: discriminator, generator, fusion, image fusion, dual paths, artificial intelligence, computer science, computer vision, pattern recognition, source code
Authors
Le Chang,Yongdong Huang,Qiufu Li,Yuduo Zhang,Lijun Liu,Qingjian Zhou
Source
Journal: Neurocomputing [Elsevier BV]
Date: 2024-02-06
Volume/Issue: 578: 127391
Citations: 11
Identifiers
DOI: 10.1016/j.neucom.2024.127391
Abstract
Existing infrared and visible image fusion techniques based on generative adversarial networks (GAN) generally disregard local and texture detail features, which tends to limit fusion performance. Therefore, we propose a GAN model based on dual fusion paths and a U-type discriminator, denoted as DUGAN. Specifically, the image and gradient paths are integrated into the generator to fully extract the content and texture detail features from the source images and their corresponding gradient images. This aids the generator in producing information-rich fusion results by integrating the output features of the dual fusion paths. In addition, we construct a U-type discriminator that attends to both the global and local information of the input images, which drives the network to generate fusion results visually consistent with the source images. Furthermore, we integrate attention blocks into the discriminator to improve the representation of salient information. Experimental results demonstrate that DUGAN outperforms other state-of-the-art methods in both qualitative and quantitative evaluations. The source code has been released at https://github.com/chang-le-11/DUGAN.
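The gradient path described above consumes gradient images derived from the source images. The abstract does not specify the gradient operator, so as an illustration only, here is a minimal pure-Python sketch of one plausible choice (forward-difference gradient magnitude); the paper's actual operator may differ, e.g. a Sobel filter.

```python
def gradient_image(img):
    """Per-pixel gradient magnitude via forward differences.

    A hypothetical stand-in for the gradient images fed to the
    gradient path; `img` is a 2-D list of grayscale intensities.
    Border pixels reuse the edge value (replicate padding).
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            gx = img[i][min(j + 1, w - 1)] - img[i][j]  # horizontal difference
            gy = img[min(i + 1, h - 1)][j] - img[i][j]  # vertical difference
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out
```

A flat region yields zero gradient everywhere, while intensity edges (the texture detail the gradient path is meant to preserve) produce large responses.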