Keywords
Benchmark (surveying)
Computer science
Artificial intelligence
Object detection
Modality (human–computer interaction)
Fuse (electrical)
Discriminator
Machine learning
Deep learning
Computer vision
Pattern recognition (psychology)
Geodesy
Electrical engineering
Engineering
Geography
Authors
Jinyuan Liu,Xin Fan,Zhanbo Huang,Guanyao Wu,Risheng Liu,Wei Zhong,Zhongxuan Luo
Identifier
DOI:10.1109/cvpr52688.2022.00571
Abstract
This study addresses the problem of fusing infrared and visible images, which appear very different, for object detection. Aiming to generate an image of high visual quality, previous approaches discover commonalities underlying the two modalities and fuse on the common space, either by iterative optimization or by deep networks. These approaches neglect that modality differences, which carry complementary information, are extremely important for both fusion and the subsequent detection task. This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network. The fusion network, with one generator and dual discriminators, seeks commonalities while learning from differences, preserving structural information of targets from the infrared and textural details from the visible. Furthermore, we build a synchronized imaging system with calibrated infrared and optical sensors, and collect what is currently the most comprehensive benchmark covering a wide range of scenarios. Extensive experiments on several public datasets and our benchmark demonstrate that our method produces not only visually appealing fused images but also higher detection mAP than state-of-the-art approaches. The source code and benchmark are available at https://github.com/dlut-dimt/TarDAL.
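The dual-discriminator idea in the abstract can be illustrated with a toy sketch: a stand-in "generator" blends the two modalities, and two stand-in "discriminators" score the fused image's fidelity to the infrared input (structure) and to the visible input (texture) respectively; the generator's adversarial objective is to score well against both. All function names and the blend/score formulas below are illustrative assumptions, not the authors' TarDAL implementation (which uses learned convolutional networks).

```python
def fuse(ir, vis, w=0.5):
    """Hypothetical generator stand-in: per-pixel weighted blend of the
    infrared (ir) and visible (vis) images, given as nested lists."""
    return [[w * a + (1 - w) * b for a, b in zip(row_ir, row_vis)]
            for row_ir, row_vis in zip(ir, vis)]

def d_score(img, ref):
    """Toy 'discriminator' stand-in: similarity in [0, 1] computed as
    1 minus the mean absolute pixel difference to the reference modality."""
    diffs = [abs(a - b) for row_i, row_r in zip(img, ref)
             for a, b in zip(row_i, row_r)]
    return 1.0 - sum(diffs) / len(diffs)

# Tiny 2x2 example images in [0, 1].
ir  = [[0.9, 0.1], [0.8, 0.2]]
vis = [[0.3, 0.7], [0.4, 0.6]]

fused = fuse(ir, vis)
# One discriminator judges fidelity to the infrared, the other to the
# visible; the generator wants both scores high (loss is their negated sum).
loss_g = -(d_score(fused, ir) + d_score(fused, vis))
```

With the equal-weight blend above, the fused image sits midway between the two inputs, so both discriminator scores are identical (0.75 each) and `loss_g` is -1.5; in the actual adversarial setup, the two discriminators are trained to tell the fused image apart from their respective modality, pushing the generator beyond a fixed blend.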