Authors
Yiwen Feng, Jiayang Zhao, Chuyu Wang, Lei Xie, Sanglu Lu
Source
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
[Association for Computing Machinery]
Date: 2024-11-21
Volume/Issue: 8 (4): 1-27
Abstract
Object boundary estimation, usually achieved by bounding box estimation, is crucial in applications such as intelligent driving, where it facilitates further interactions like obstacle avoidance and navigation. Existing solutions rely mainly on computer vision, which often performs poorly in low-visibility conditions, e.g., harsh weather, and offers limited resolution for depth estimation. Recent studies show the potential of mmWave radar for object detection. However, due to its inherent drawbacks, conventional mmWave sensing suffers from severe interference from noise points in the point cloud, leading to position ambiguity, as well as from sparsity and limited spatial resolution, leading to boundary ambiguity. In this paper, we propose a novel mmWave-radar-based bounding box estimation system that fully leverages the spatial features of the antenna array and the temporal features of motion scanning to detect objects and estimate their 3D bounding boxes. To mitigate interference from noise points, we introduce a new integration metric, Reflection Saliency, which evaluates the effectiveness of each point across the signal-to-noise ratio (SNR), speed, and spatial domains, successfully removing the majority of noise points. Moreover, we propose a Prior-Time Heuristic Point Cloud Augmentation method that enriches the point representation of objects using data from previous frames. To obtain boundary information, we propose a beamforming-based model to extract the Angle-Reflection Profile (ARP), which depicts the spatial distribution of an object's reflection. Furthermore, a generative neural network refines the boundary and estimates the 3D bounding box by incorporating the ARP features, the SNR of the cloud points, and depth information. We have implemented a system prototype on a robot car in real-world scenarios, and extensive experiments show that the average position error of the proposed system in 3D bounding box estimation is 0.11 m.
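The abstract describes Reflection Saliency only at a high level: each radar point is scored across the SNR, speed, and spatial domains, and low-scoring points are discarded as noise. A minimal sketch of that idea is shown below. The weighting scheme, the k-nearest-neighbor density term, and the threshold are all assumptions for illustration; the paper's exact formulation is not given in the abstract.

```python
import numpy as np


def reflection_saliency(points, snr, speed, k=5, weights=(0.4, 0.3, 0.3)):
    """Score each radar point by combining normalized SNR, normalized
    absolute radial speed, and local spatial density.

    Hypothetical weighting: the actual Reflection Saliency metric in the
    paper may combine these domains differently.
    """
    def normalize(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    # Spatial-domain term: points in dense clusters (small mean distance
    # to their k nearest neighbors) are more likely true reflections.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    density = normalize(-knn_mean)  # closer neighbors -> higher score

    w_snr, w_speed, w_density = weights
    return (w_snr * normalize(snr)
            + w_speed * normalize(np.abs(speed))
            + w_density * density)


def filter_noise(points, snr, speed, thresh=0.5):
    """Keep only points whose saliency score exceeds a threshold."""
    scores = reflection_saliency(points, snr, speed)
    return points[scores >= thresh]
```

For a moving object, its reflection points tend to cluster spatially, carry higher SNR, and share a nonzero Doppler speed, so all three terms reinforce each other; isolated static low-SNR returns score near zero on every term and are dropped.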