Keywords
LiDAR, point cloud, obstacle, 3D object detection, artificial intelligence, computer vision, adversarial examples, unmanned aerial vehicle (UAV), remote sensing, pattern recognition
Authors
Jian Wang, Fan Li, Xuchong Zhang, Hongbin Sun
Identifier
DOI:10.1109/tmm.2023.3302018
Abstract
LiDAR sensors are widely used in many safety-critical applications such as autonomous driving and drone control, and the collected data, called point clouds, are subsequently processed by 3D object detectors for visual perception. Recent works have shown that attackers can inject virtual points into LiDAR sensors by strategically transmitting laser pulses to them; additionally, deep visual models have been found to be vulnerable to carefully crafted adversarial examples. Therefore, a LiDAR-based perception system may be maliciously attacked, with serious safety consequences. In this paper, we present a highly deceptive adversarial obstacle generation algorithm against deep 3D detection models that mimics fake obstacles within the effective detection range of LiDAR using a limited number of points. To achieve this goal, we first perform a physical LiDAR simulation to construct sparse obstacle point clouds. Then, we devise a strong attack strategy to adversarially perturb prototype points along the direction of each ray. Our method achieves a high attack success rate while complying with physical laws at the hardware level. We perform comprehensive experiments on different types of 3D detectors and determine that voxel-based detectors are more vulnerable to adversarial attacks than point-based methods. For example, our approach achieves an 89% mean attack success rate against PV-RCNN by using only 20 points to spoof a fake car.
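The physical constraint described in the abstract (perturbing prototype points only along each ray's direction) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the range limits, and the random prototype points are illustrative assumptions. The key idea is that a laser-spoofing attacker cannot move a point sideways, only change its apparent return distance, so perturbations must preserve each point's ray direction from the sensor.

```python
import numpy as np

def perturb_along_ray(points, delta_r, r_min=0.5, r_max=70.0):
    """Perturb each LiDAR point only along its ray from the sensor origin.

    points:  (N, 3) Cartesian coordinates, sensor at the origin.
    delta_r: (N,) signed range perturbations (e.g. produced by an
             attack optimizer; here just arbitrary values).
    r_min, r_max: assumed effective detection range of the sensor.

    Decomposing each point into a unit ray direction and a range, then
    perturbing only the range, keeps the spoofed points physically
    realizable at the hardware level.
    """
    r = np.linalg.norm(points, axis=1, keepdims=True)   # (N, 1) ranges
    dirs = points / np.clip(r, 1e-6, None)              # unit ray directions
    r_new = np.clip(r + delta_r.reshape(-1, 1), r_min, r_max)
    return dirs * r_new

# Example: 20 prototype points of a fake obstacle ~15 m ahead,
# nudged by up to 10 cm along their rays.
rng = np.random.default_rng(0)
proto = rng.uniform(-1, 1, size=(20, 3)) + np.array([15.0, 0.0, 0.0])
adv = perturb_along_ray(proto, rng.uniform(-0.1, 0.1, size=20))
```

A gradient-based attack would choose `delta_r` to maximize the detector's loss; the sketch only shows the projection step that keeps the perturbation on the ray.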