Authors
Zhuliang Le,Jun Huang,Han Xu,Fan Fan,Yong Ma,Xiaoguang Mei,Jiayi Ma
DOI: 10.1016/j.inffus.2022.07.013
Abstract
In this paper, we propose a novel unsupervised continual-learning generative adversarial network for unified image fusion, termed UIFGAN. Rather than training an individual model for each fusion task or jointly training all tasks, our model trains a single generative adversarial network with memory in a continual-learning manner across multiple image fusion tasks. We use elastic weight consolidation to avoid forgetting what was learned from previous tasks when tasks are trained sequentially. In each task, the fused image is produced through adversarial learning between a generator and a discriminator. Meanwhile, a max-gradient loss function forces the fused image to capture the richer texture details from the corresponding regions of the two source images, which applies to most typical image fusion tasks. Extensive experiments on multi-exposure, multi-modal and multi-focus image fusion tasks demonstrate the advantages of our method over state-of-the-art approaches.
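The abstract states that UIFGAN relies on elastic weight consolidation (EWC) to avoid forgetting earlier fusion tasks when training sequentially. As a rough illustration of that mechanism (a minimal sketch of the standard EWC penalty, not the paper's exact formulation; variable names and the diagonal-Fisher form are assumptions), the regularizer pulls the current weights toward the weights learned on a previous task, with per-weight importance estimated by the Fisher information:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC penalty: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current model weights (flattened)
    theta_star -- weights learned on the previous task
    fisher     -- diagonal Fisher information, estimating each weight's
                  importance to the previous task
    lam        -- strength of the consolidation term (hypothetical name)
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy usage: drifting on an "important" coordinate (high Fisher value)
# is penalized much more than drifting on an unimportant one.
theta_star = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.1, 1.0])
theta = np.array([1.2, -1.0, 0.5])
loss = ewc_penalty(theta, theta_star, fisher, lam=2.0)  # -> 0.5
```

In sequential training, this penalty would be added to each new task's adversarial loss, so the generator can adapt to the new fusion task while weights that mattered for earlier tasks stay close to their consolidated values.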