Computer science
Encoder
Task (project management)
Image (mathematics)
Image fusion
Artificial intelligence
Function (biology)
Decoding methods
Computer vision
Pattern recognition (psychology)
Algorithm
Engineering
Evolutionary biology
Biology
Operating system
Systems engineering
Authors
Zhuoxiao Li, Jinyuan Liu, Risheng Liu, Xin Fan, Zhongxuan Luo, Wen Gao
Identifier
DOI: 10.1109/icme51207.2021.9428212
Abstract
Image fusion methods have achieved remarkable progress, but each is typically tailored to a single type of fusion task and overlooks the deeper correlations that exist across tasks. To exploit these cross-task relations, we integrate different image fusion tasks into a unified network. Our method consists of multiple task-oriented encoders and a generic decoder, together with a self-adapting loss function. The task-oriented encoders are trained to learn task-specific features, while the generic decoder reconstructs the fused features into a comprehensive image. By introducing the self-adapting loss, the method automatically adjusts itself to the characteristics of the source data in different tasks. In addition, we formulate a training strategy based on bilevel optimization that updates the multi-encoder and the generic decoder in an alternating manner. Extensive experimental results demonstrate the superior performance of our method over state-of-the-art methods.
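The abstract describes an architecture in which several task-oriented encoders feed a single generic decoder. Below is a minimal PyTorch sketch of that structure only; the layer choices, the element-wise-mean fusion rule, and the task names (ir_vis, multi_exposure, multi_focus) are placeholder assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    """One task-specific encoder (placeholder CNN; the paper's exact layers are not specified here)."""
    def __init__(self, in_channels=1, feat_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class GenericDecoder(nn.Module):
    """Shared decoder that reconstructs a fused image from fused features."""
    def __init__(self, feat_channels=64, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, out_channels, 3, padding=1),
        )

    def forward(self, feats):
        return self.net(feats)

class UnifiedFusionNet(nn.Module):
    """Multiple task-oriented encoders sharing one generic decoder.
    Fusing features by an element-wise mean is an assumption for this sketch."""
    def __init__(self, task_names, in_channels=1):
        super().__init__()
        self.encoders = nn.ModuleDict({t: TaskEncoder(in_channels) for t in task_names})
        self.decoder = GenericDecoder()

    def forward(self, task, source_a, source_b):
        enc = self.encoders[task]                      # pick the encoder for this task
        fused = (enc(source_a) + enc(source_b)) / 2.0  # simple feature fusion rule (assumed)
        return self.decoder(fused)                     # generic decoder reconstructs the image

# Usage: one forward pass for a hypothetical infrared-visible fusion task.
model = UnifiedFusionNet(task_names=["ir_vis", "multi_exposure", "multi_focus"])
a = torch.rand(1, 1, 128, 128)
b = torch.rand(1, 1, 128, 128)
fused = model("ir_vis", a, b)
print(fused.shape)  # torch.Size([1, 1, 128, 128])
```

In the described training strategy, the encoders and the shared decoder would be updated alternately (a bilevel scheme), which in practice amounts to stepping one group of parameters while freezing the other; that loop is omitted here.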