Computer science
Object detection
Sensor fusion
Detector
Artificial intelligence
Computer vision
Minimum bounding box
Perception
Point cloud
Robustness (evolution)
Leapfrog surveillance
Real-time computing
Pattern recognition (psychology)
Telecommunications
Biochemistry
Chemistry
Neuroscience
Gene
Image (mathematics)
Biology
Authors
Shaowu Zheng,Chong Xie,Shanhu Yu,Ming Ye,Ruyi Huang,Weihua Li
Identifier
DOI:10.1109/icsmd57530.2022.10058282
Abstract
Roadside perception is a fundamental task for vehicle-to-road cooperative perception and traffic scheduling. However, most existing roadside perception strategies deploy sensors at a single viewpoint or are evaluated only in simulation. Because a single sensor covers a limited field of view, such methods usually cannot continuously detect the same object from different viewpoints or provide a wide sensing range in complex scenarios. To address these issues, this paper proposes a robust strategy for roadside cooperative perception based on multi-sensor fusion (RCP-MSF). A 2D object detector built on the NanoDet model is improved to handle multiple images simultaneously. In addition, an ultra-fast 3D object detection strategy based on point cloud processing is proposed instead of relying on existing high-cost deep-learning models. Moreover, a data association module for multi-modal sensor information fusion is presented to match the 2D and 3D bounding boxes; the module is detector-agnostic, so any 2D and 3D object detectors can be paired with it. Furthermore, a roadside perception dataset named SCUT-V2R is constructed to verify the performance of the proposed method. Experiments on this dataset demonstrate that RCP-MSF outperforms camera-only and lidar-only strategies in object detection precision while maintaining real-time performance.
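The abstract describes a data association module that matches 2D image detections with 3D lidar detections, but it gives no implementation details. Below is a minimal, hypothetical sketch of one common way such 2D–3D association is done: project each 3D box into the image using the camera intrinsics, score overlap against the 2D boxes with IoU, and solve the assignment with the Hungarian algorithm. The function names, the intrinsic matrix `K`, and the IoU threshold are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not the paper's implementation): associate 2D
# image detections with 3D lidar detections by projecting each 3D box into
# the image plane and matching projected boxes to 2D boxes by IoU with the
# Hungarian algorithm. All names and thresholds are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def project_box3d_to_2d(corners_3d, K):
    """Project the 8 corners of a 3D box (camera coordinates, shape 8x3)
    into the image and return the enclosing axis-aligned 2D box."""
    pts = (K @ corners_3d.T).T                 # 8x3 homogeneous image points
    pts = pts[:, :2] / pts[:, 2:3]             # perspective division
    x1, y1 = pts.min(axis=0)
    x2, y2 = pts.max(axis=0)
    return np.array([x1, y1, x2, y2])


def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(boxes_2d, corners_3d_list, K, iou_thresh=0.3):
    """Return (i, j) index pairs matching 2D detections to 3D detections."""
    if len(boxes_2d) == 0 or len(corners_3d_list) == 0:
        return []
    projected = [project_box3d_to_2d(c, K) for c in corners_3d_list]
    cost = np.zeros((len(boxes_2d), len(projected)))
    for i, b2 in enumerate(boxes_2d):
        for j, b3 in enumerate(projected):
            cost[i, j] = 1.0 - iou_2d(b2, b3)  # lower cost = higher overlap
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if 1.0 - cost[i, j] >= iou_thresh]
```

The IoU gate keeps spurious pairs out of the fused output; the actual RCP-MSF module may use a different association cost or matching rule.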