Keywords
Computer science
Artificial intelligence
Embedding
Feature (linguistics)
Object detection
Pattern recognition (psychology)
Object (grammar)
Task (project management)
Image fusion
Feature extraction
Semantic gap
Computer vision
Infrared
Image (mathematics)
Image retrieval
Engineering
Philosophy
Physics
Systems engineering
Optics
Linguistics
Authors
Wenda Zhao, Shigeng Xie, Fan Zhao, You He, Huchuan Lu
Identifier
DOI:10.1109/cvpr52729.2023.01341
Abstract
Fusing infrared and visible images can provide more texture details for the subsequent object detection task. Conversely, the detection task furnishes object semantic information that improves infrared and visible image fusion. Thus, joint fusion-and-detection learning that exploits this mutual promotion is attracting increasing attention. However, the feature gap between these two tasks of different levels hinders progress. To address this issue, this paper proposes infrared and visible image fusion via meta-feature embedding from object detection. The core idea is a meta-feature embedding model designed to generate object semantic features according to the fusion network's ability, so that the semantic features are naturally compatible with the fusion features. The model is optimized by simulating meta-learning. Moreover, we further implement mutual promotion learning between the fusion and detection tasks to improve the performance of both. Comprehensive experiments on three public datasets demonstrate the effectiveness of our method. Code and model are available at: https://github.com/wdzhao123/MetaFusion.
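The abstract's central claim is that an embedding model can bridge the feature gap by mapping detection-side semantic features into a space compatible with fusion-side features. As a rough illustration only (not the authors' meta-learning procedure), the following sketch trains a hypothetical linear embedding so that projected fusion features approximate detection semantic features; all arrays and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: features a fusion network might produce and
# semantic features a detection backbone might produce (random here).
fusion_feats = rng.normal(size=(32, 16))    # 32 samples, 16-dim fusion features
semantic_feats = rng.normal(size=(32, 8))   # 8-dim detection semantic features

# Simplified "embedding model": a linear map W trained by gradient descent
# so embedded fusion features approximate the semantic features, i.e. the
# two feature spaces become compatible. (The paper instead optimizes an
# embedding network by simulating meta-learning; this is only a sketch.)
W = np.zeros((16, 8))
lr = 0.05
for _ in range(200):
    pred = fusion_feats @ W
    grad = fusion_feats.T @ (pred - semantic_feats) / len(fusion_feats)
    W -= lr * grad

init_mse = np.mean(semantic_feats ** 2)                    # error of W = 0
final_mse = np.mean((fusion_feats @ W - semantic_feats) ** 2)
print(final_mse < init_mse)  # training narrows the feature gap
```

In the actual method, both the embedding model and the fusion network are neural networks trained jointly, with the detection task supplying the semantic targets; the sketch above only conveys the "align one feature space to another" intuition.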