Computer science
Artificial intelligence
Frame (networking)
Computer vision
Event (particle physics)
Leapfrog surveillance
Object detection
Cross entropy
Sensor fusion
Feature (linguistics)
Gaussian distribution
Mixture model
Pattern recognition (psychology)
Quantum mechanics
Telecommunications
Linguistics
Philosophy
Physics
Authors
Mengyun Liu, Na Qi, Yong Shi, Baocai Yin
Identifier
DOI:10.1109/icip42928.2021.9506561
Abstract
Under extreme conditions such as excessive light, insufficient light, or high-speed motion, vehicle detection with frame-based cameras remains challenging. Event cameras can capture frame and event data asynchronously, which greatly helps address object detection under the aforementioned extreme conditions. We propose a fusion network with an Attention Fusion module for vehicle detection that jointly exploits the features of both frame and event data. The frame and event data are fed separately into a symmetric framework based on Gaussian YOLOv3, which models the bounding-box (bbox) coordinates of YOLOv3 as Gaussian parameters and predicts the localization uncertainty of each bbox with a redesigned cross-entropy bbox loss function. The feature maps of these Gaussian parameters and the confidence map in each layer are deeply fused in the Attention Fusion module. Finally, the feature maps of the frame and event data are concatenated at the detection layer to improve detection accuracy. Experimental results show that the proposed method outperforms state-of-the-art methods that use only a traditional frame-based network, as well as joint networks combining event and frame information.
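The two ideas in the abstract can be illustrated with a minimal sketch: a Gaussian-YOLOv3-style head predicts each bbox coordinate as a mean plus an uncertainty (sigma) and is trained with a Gaussian negative log-likelihood, and an attention module fuses frame and event feature maps via modality-wise weights. This is an assumption-laden toy in NumPy, not the paper's implementation; the function names, the softmax-over-modalities weighting, and all numbers below are illustrative.

```python
import numpy as np

def gaussian_nll(mu, sigma, target, eps=1e-9):
    # Per-coordinate negative log-likelihood of the ground-truth coordinate
    # under N(mu, sigma^2) -- the localization loss style used by Gaussian
    # YOLOv3, where sigma doubles as a localization-uncertainty estimate.
    var = sigma ** 2 + eps
    return 0.5 * np.log(2.0 * np.pi * var) + (target - mu) ** 2 / (2.0 * var)

def attention_fuse(frame_feat, event_feat):
    # Toy attention fusion (a hypothetical stand-in for the paper's Attention
    # Fusion module): per-element softmax over the two modalities, then a
    # weighted sum, so each output value lies between the two inputs.
    stacked = np.stack([frame_feat, event_feat])            # shape (2, ...)
    weights = np.exp(stacked) / np.exp(stacked).sum(axis=0)  # softmax over axis 0
    return (weights * stacked).sum(axis=0)

# Predicted bbox (x, y, w, h): means, uncertainties, and the ground truth.
mu = np.array([0.48, 0.52, 0.30, 0.25])
sigma = np.array([0.05, 0.05, 0.10, 0.10])
target = np.array([0.50, 0.50, 0.28, 0.26])
loss = gaussian_nll(mu, sigma, target).mean()

# Fusing two tiny 2x2 "feature maps" from the frame and event branches.
frame_feat = np.array([[0.2, 0.8], [0.5, 0.1]])
event_feat = np.array([[0.6, 0.3], [0.4, 0.9]])
fused = attention_fuse(frame_feat, event_feat)
```

Large sigma down-weights the squared-error term but is penalized by the log-variance term, which is what lets the network report low confidence on hard (e.g. blurred or under-exposed) boxes instead of being forced into an overconfident estimate.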