Keywords
point cloud; cluster analysis; minimum bounding box; artificial intelligence; object detection; computer vision; mobile robot; LiDAR; segmentation
Authors
Xiaobin Xu, Lei Zhang, Jian Yang, Chenfei Cao, Zhiying Tan, Minzhou Luo
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/Issue: 70: 1-12
Cited by: 17
Identifier
DOI: 10.1109/tim.2021.3102739
Abstract
With the rapid development of mobile robots, environmental perception based on a single sensor can hardly meet the requirements of object detection and path planning in complex scenarios. In this article, an object detection fusion algorithm based on both the LiDAR point cloud and the camera image is proposed. First, YOLOv4 is used to detect objects in the image. Then, the point cloud is projected into the image, and the target point cloud is filtered out according to the extent of the 2-D detection box. The target point cloud is clustered by density to output a bounding box with a semantic label. Meanwhile, the original point cloud is processed by an improved four-neighborhood clustering algorithm based on a Euclidean distance and an angle threshold to output another bounding box without a semantic label. Finally, the clusters obtained by the two methods are fused and evaluated to produce the final object detection results. Tests on the KITTI dataset show that the accuracy of the improved four-neighborhood clustering algorithm increases to 0.835. The final semantic segmentation results have average positioning errors of 0.033 m and 0.073 m in the $x$- and $y$-directions, and the average angular error of the vehicle heading is 0.90°. Compared with two other point cloud segmentation networks, our approach achieves the highest accuracy with sufficient real-time performance, reaching 9.96 Hz in the experiment.
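To make the camera-branch pipeline concrete, the following is a minimal sketch (not the authors' released code) of the projection-and-filtering step described in the abstract: LiDAR points are projected into the image plane, only those falling inside a 2-D detection box are kept as the target point cloud, and the result is density-clustered. The KITTI-style calibration matrices `P` and `Tr`, the box layout, the function names, and the DBSCAN parameters are all illustrative assumptions; the abstract does not name the exact density-clustering algorithm.

```python
# Sketch of two steps from the abstract: project LiDAR points into the
# image, keep those inside a 2-D detection box, then density-cluster them.
# Calibration format (KITTI-style P and Tr), box layout, and clustering
# parameters are illustrative assumptions, not the authors' exact method.
import numpy as np
from sklearn.cluster import DBSCAN

def filter_points_in_box(points, P, Tr, box):
    """Keep LiDAR points whose image projection falls inside a 2-D box.

    points : (N, 3) LiDAR points in the sensor frame.
    P      : (3, 4) camera projection matrix.
    Tr     : (4, 4) homogeneous LiDAR-to-camera transform.
    box    : (x_min, y_min, x_max, y_max) from the 2-D detector, e.g. YOLOv4.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous (N, 4)
    cam = (Tr @ pts_h.T).T                                  # camera frame

    in_front = cam[:, 2] > 0          # drop points behind the image plane
    cam = cam[in_front]

    proj = (P @ cam.T).T              # pixel coordinates up to scale
    uv = proj[:, :2] / proj[:, 2:3]   # perspective division by depth

    x_min, y_min, x_max, y_max = box
    inside = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
              (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    return points[in_front][inside]

def cluster_target_points(target_points, eps=0.5, min_samples=10):
    # DBSCAN is one common choice consistent with "density clustering";
    # eps and min_samples here are placeholder values.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(target_points)
```

In a full pipeline along the lines the abstract describes, the clusters from this branch, which inherit the detector's class label, would then be fused with the label-free clusters produced by the four-neighborhood branch to yield the final detections.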