Fusion
Generative adversarial network
Generative grammar
Computer science
Image fusion
Fusion rule
Premise
Image (mathematics)
Filter (signal processing)
Computer vision
Artificial intelligence
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Yujing Rao, Dan Wu, Mina Han, Ting Wang, Yang Yang, Tao Lei, Chengjiang Zhou, Haicheng Bai, Lin Xing
Identifiers
DOI:10.1016/j.inffus.2022.12.007
Abstract
Infrared and visible image fusion methods aim to combine high-intensity instances and detailed texture features into fused images. However, their ability to capture compact features under various adverse conditions is limited, because the distribution of these multimodal features is generally cluttered; targeted designs are therefore necessary to constrain the multimodal features to be compact. In addition, many existing attempts are not robust to low-quality images captured under adverse conditions, and the long fusion time of most methods makes subsequent vision tasks less effective. To address these issues, we propose a generative adversarial network with intensity attention modules and semantic transition modules, termed AT-GAN, which extracts key information from multimodal images more efficiently. The intensity attention modules aim to preserve infrared instance features clearly, while the semantic transition modules attempt to filter out noise and other redundant features in the visible texture. Moreover, an adaptive fusion equilibrium point can be learned by a quality assessment module. Finally, experiments on a variety of datasets reveal that AT-GAN can adaptively learn feature fusion and image reconstruction synchronously, and that it further improves timeliness while maintaining the fusion superiority of the proposed method over the state of the art.
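The abstract describes a two-branch generator: infrared features pass through an intensity attention module, visible features pass through a semantic transition module, and the two streams are merged into one fused image. The following is a minimal PyTorch sketch of that structure, assuming single-channel inputs, a simple sigmoid attention gate, and a residual filtering block; all module designs, names, and layer sizes here are illustrative assumptions, not the paper's actual AT-GAN implementation (which also includes a discriminator and a quality assessment module not shown).

```python
# Minimal sketch of the two-branch fusion generator outlined in the abstract.
# All designs below are assumptions for illustration; the paper's AT-GAN differs.
import torch
import torch.nn as nn


class IntensityAttention(nn.Module):
    """Assumed attention gate that re-weights infrared features to keep
    high-intensity instance regions (stand-in for the paper's module)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)  # emphasize salient infrared regions


class SemanticTransition(nn.Module):
    """Assumed residual filtering block that suppresses noise and redundancy
    in visible-texture features (stand-in for the paper's module)."""

    def __init__(self, channels: int):
        super().__init__()
        self.filter = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.filter(x)  # residual refinement of texture features


class FusionGenerator(nn.Module):
    """Encode each modality, apply its branch-specific module, then decode
    the concatenated features into a single fused image."""

    def __init__(self, base: int = 32):
        super().__init__()
        self.enc_ir = nn.Conv2d(1, base, kernel_size=3, padding=1)
        self.enc_vis = nn.Conv2d(1, base, kernel_size=3, padding=1)
        self.att = IntensityAttention(base)
        self.trans = SemanticTransition(base)
        self.dec = nn.Sequential(
            nn.Conv2d(2 * base, base, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, kernel_size=1),
            nn.Tanh(),
        )

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        f_ir = self.att(torch.relu(self.enc_ir(ir)))
        f_vis = self.trans(torch.relu(self.enc_vis(vis)))
        return self.dec(torch.cat([f_ir, f_vis], dim=1))


if __name__ == "__main__":
    # Smoke test with single-channel 256x256 inputs.
    g = FusionGenerator()
    ir = torch.randn(1, 1, 256, 256)
    vis = torch.randn(1, 1, 256, 256)
    print(g(ir, vis).shape)  # torch.Size([1, 1, 256, 256])
```

In this sketch the equilibrium between the two modalities is fixed by the learned decoder weights; the paper's quality assessment module instead learns an adaptive fusion equilibrium point, which would sit between the two branches and the decoder.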