Keywords
Computer Science; Point Cloud; LiDAR; Artificial Intelligence; Radar; Computer Vision; Object Detection; Radar Imaging; Sensor Fusion; Remote Sensing; Pattern Recognition
Authors
Lili Fan,Changxian Zeng,Yunjie Li,Xu Wang,Dongpu Cao
Source
Journal: SAE Technical Paper Series
Date: 2023-12-31
Citations: 2
Abstract
The fusion of multi-modal perception plays a pivotal role in vehicle behavior decision-making for autonomous driving. However, much of the previous research has focused on fusing Lidar and cameras. Although Lidar supplies dense point clouds, it is costly, and the sheer volume of point cloud data can cause computational delays. Investigating perception fusion built on 4D millimeter-wave radar is therefore of paramount importance for reducing cost and enhancing safety. Nevertheless, 4D millimeter-wave radar faces challenges including sparse point clouds, limited information content, and a lack of fusion strategies. In this paper, we introduce, for the first time, an approach that leverages Graph Neural Networks to help express features of 4D millimeter-wave radar point clouds. This approach effectively extracts features from the unstructured point cloud, mitigating the missed detections caused by sparsity. Additionally, we propose the Multi-Modal Fusion Module (MMFM), which aligns and fuses features from graphs, radar pseudo-images generated by Pillars encoding, and camera images within a geometric space. We validate our model on the View-of-Delft (VoD) dataset. Experimental results demonstrate that the proposed method efficiently fuses camera and 4D radar features, yielding improved 3D detection performance.
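As a rough illustration of the mechanisms the abstract mentions, the sketch below builds a k-nearest-neighbour graph over a sparse 4D radar point cloud, runs one generic message-passing (EdgeConv-style) layer, and scatters the resulting per-point features into a pillar-like bird's-eye-view pseudo-image. This is a minimal sketch in PyTorch, not the authors' code; the layer type, the value of k, the grid size, and the detection-range extent are all assumptions, and the camera-alignment part of the MMFM is omitted.

```python
# A minimal, hypothetical sketch (not the authors' implementation) of the two radar
# branches described in the abstract: (1) a k-nearest-neighbour graph over sparse 4D
# radar points with one message-passing layer, and (2) a pillar-style scatter of
# per-point features into a bird's-eye-view pseudo-image. Layer choices, tensor
# shapes, and grid parameters are illustrative assumptions.
import torch
import torch.nn as nn


def knn_graph(xyz: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Indices of the k nearest neighbours for each point. xyz: (N, 3) -> (N, k)."""
    dist = torch.cdist(xyz, xyz)                             # (N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]    # drop the self-match


class EdgeConvLayer(nn.Module):
    """One EdgeConv-style message-passing layer (a common GNN building block,
    assumed here; the abstract does not specify the exact layer)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, feats: torch.Tensor, nbr_idx: torch.Tensor) -> torch.Tensor:
        # feats: (N, C) per-point attributes, e.g. x, y, z, Doppler velocity, RCS.
        nbr = feats[nbr_idx]                                  # (N, k, C) neighbour features
        ctr = feats.unsqueeze(1).expand_as(nbr)               # (N, k, C) centre features
        msg = self.mlp(torch.cat([ctr, nbr - ctr], dim=-1))   # (N, k, out_dim) messages
        return msg.max(dim=1).values                          # aggregate over neighbours


def pillar_pseudo_image(xyz: torch.Tensor, feats: torch.Tensor,
                        grid=(64, 64), extent: float = 50.0) -> torch.Tensor:
    """Scatter per-point features onto a BEV grid (sum-pooled per cell), a
    simplified stand-in for the Pillars-based radar pseudo-image."""
    h, w = grid
    ix = ((xyz[:, 0] + extent) / (2 * extent) * w).long().clamp(0, w - 1)
    iy = ((xyz[:, 1] + extent) / (2 * extent) * h).long().clamp(0, h - 1)
    canvas = torch.zeros(feats.shape[1], h, w)
    canvas.view(feats.shape[1], -1).index_add_(1, iy * w + ix, feats.t())
    return canvas                                             # (C, H, W) pseudo-image


# Example: 200 sparse radar points with 5 attributes (x, y, z, Doppler, RCS).
with torch.no_grad():
    pts = torch.randn(200, 5) * 10
    nbrs = knn_graph(pts[:, :3], k=8)
    graph_feats = EdgeConvLayer(5, 64)(pts, nbrs)             # (200, 64) graph features
    bev = pillar_pseudo_image(pts[:, :3], graph_feats)        # (64, 64, 64) pseudo-image
```

The intent of the graph branch, as the abstract describes it, is to let each sparse radar point aggregate context from its neighbours before the features are rasterized into the pseudo-image, which helps compensate for the sparsity of 4D radar returns.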