Discriminator
Artificial intelligence
Computer science
Generator (machine learning)
Computer vision
Pattern recognition
Image processing
Divergence (statistics)
Pixel
Image fusion
Power (physics)
Telecommunications
Detector
Physics
Philosophy
Quantum mechanics
Linguistics
Authors
Han Xu,Pengwei Liang,Wei Yu,Junjun Jiang,Jiayi Ma
Identifier
DOI:10.24963/ijcai.2019/549
Abstract
In this paper, we propose a new end-to-end model, called dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through an adversarial process between a generator and two discriminators, in addition to a specially designed content loss. The generator is trained to produce realistic fused images that fool both discriminators. The two discriminators are trained to estimate, respectively, the JS divergence between the probability distribution of downsampled fused images and that of infrared images, and the JS divergence between the probability distribution of gradients of fused images and that of gradients of visible images. The fused images can thus recover features that are not constrained by the content loss alone. Consequently, the prominence of thermal targets in the infrared image and the texture details in the visible image can be simultaneously preserved, or even enhanced, in the fused image. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN is particularly well suited to fusing images of different resolutions. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state-of-the-art.
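As a concrete illustration of the Jensen–Shannon divergence that each discriminator implicitly estimates during adversarial training, the following is a minimal sketch computing the JS divergence between two hypothetical discrete distributions (illustrative only; not the authors' implementation, which operates on image and gradient distributions via learned discriminators):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution
    # KL(a || b) with eps to avoid log(0); zero-mass bins contribute 0
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions: divergence is 0
print(js_divergence([0.5, 0.5], [0.5, 0.5]))
# Disjoint distributions: divergence approaches ln(2) ~ 0.6931
print(js_divergence([1.0, 0.0], [0.0, 1.0]))
```

In DDcGAN, one discriminator drives this divergence down between the downsampled fused image and the infrared image, while the other does the same between the gradients of the fused and visible images, which is what lets both modalities' features survive in the fused output.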