Keywords
Computer science
Artificial intelligence
Pattern recognition
Image fusion
Normalization
Computer vision
Fusion
Representation
Encoder
Feature extraction
Feature
Image
Authors
Yuan Gao, Shiwei Ma, Jingjing Liu
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2022-09-15
Volume/Issue: 33 (2): 549-561
Cited by: 27
Identifier
DOI:10.1109/tcsvt.2022.3206807
Abstract
This paper proposes a new infrared and visible image fusion method based on a densely connected disentangled representation generative adversarial network (DCDR-GAN), which separates the content features and the modal features of infrared and visible images through disentangled representation (DR) and fuses them separately. To handle the mutually exclusive features of the two modalities, the modal features are injected into the reconstruction of the content features through adaptive instance normalization (AdaIN), which reduces cross-modal interference. To limit feature loss and ensure that features at all levels are expressed in the fused image, DCDR-GAN uses densely connected content encoders and a densely connected fusion decoder, and builds multi-scale fusion structures between the encoders and the decoder through long skip connections. In addition, content and modal reconstruction losses are proposed to preserve the information of the source images. After two-phase training, the model generates the fused image. Subjective and objective evaluations on the TNO and INO datasets show that the proposed method achieves better visual quality and higher metric values than other state-of-the-art methods.
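As a rough illustration of how modal features could be injected into content features via AdaIN, the sketch below uses the standard AdaIN formulation (normalize the content feature with its own channel statistics, then re-scale and re-shift it with the channel statistics of the modal feature). The function name `adain`, the epsilon value, and the assumption that both inputs are spatial feature maps of shape (N, C, H, W) are illustrative choices, not details taken from the paper.

```python
import torch

def adain(content_feat: torch.Tensor, modal_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Hypothetical sketch: standard AdaIN applied to (N, C, H, W) feature maps.
    # Normalize the content feature per channel, then re-scale/shift it with
    # the per-channel mean and std of the modal feature.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    m_mean = modal_feat.mean(dim=(2, 3), keepdim=True)
    m_std = modal_feat.std(dim=(2, 3), keepdim=True) + eps
    return m_std * (content_feat - c_mean) / c_std + m_mean

if __name__ == "__main__":
    content = torch.randn(1, 64, 32, 32)  # content feature map (illustrative size)
    modal = torch.randn(1, 64, 32, 32)    # modal (infrared or visible) feature map
    out = adain(content, modal)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this reading, the content branch keeps the shared scene structure while the modal statistics re-introduce modality-specific appearance; the paper's actual injection mechanism may differ in where and how often AdaIN is applied.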