Journal: Journal of Intelligent and Fuzzy Systems [IOS Press]  Date: 2022-04-29  Volume/Issue: 43 (5): 5771-5782
Identifier
DOI:10.3233/jifs-213074
Abstract
Most current deep learning object detection methods based on multi-modal information fusion cannot directly control the quality of the fused images, because the fusion depends only on the detection results. In principle, this indirect control is not conducive to the network's target detection. To address this problem, we propose a multi-modal information cross-fusion detection method based on a generative adversarial network (CrossGAN-Detection), which is composed of a GAN and a target detection network; during training, the target detection network acts as the GAN's second discriminator. Through a content loss function and the dual discriminator, directly controllable guidance is provided for the generator, which is designed to adaptively learn the relationship between different modalities through cross fusion. We conduct extensive experiments on the KITTI dataset, the prevalent dataset in the fusion-detection field. The experimental results show that the AP of the proposed method for vehicle detection reaches 96.66%, 87.15%, and 78.46% in the easy, moderate, and hard categories respectively, an improvement of about 7% over state-of-the-art methods.
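The dual-discriminator guidance described above can be sketched as a composite generator objective: a content loss on the fused image, an adversarial loss from the image discriminator, and a detection loss fed back from the detection network acting as the second discriminator. The function name, term structure, and weights below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the CrossGAN-Detection generator objective:
# the generator receives directly controllable guidance from a content
# loss plus two discriminators (the image discriminator and the target
# detection network). Weights w_content, w_adv, w_det are assumed values
# for illustration only.

def generator_loss(content_loss, adv_loss, detection_loss,
                   w_content=1.0, w_adv=0.1, w_det=0.5):
    """Weighted sum of the three loss terms guiding the generator."""
    return (w_content * content_loss
            + w_adv * adv_loss
            + w_det * detection_loss)

# Illustrative per-term loss values:
total = generator_loss(content_loss=0.8, adv_loss=0.3, detection_loss=0.4)
print(round(total, 3))  # 1.0*0.8 + 0.1*0.3 + 0.5*0.4 = 1.03
```

In such a scheme, minimizing the detection-loss term pushes the generator toward fused images that are useful for detection, rather than merely photometrically plausible.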