LiDAR sensors are widely used in safety-critical applications such as autonomous driving and drone control, where the collected data, called point clouds, are processed by 3D object detectors for visual perception. Recent works have shown that attackers can inject virtual points into LiDAR sensors by strategically transmitting laser pulses to them; in addition, deep visual models have been found to be vulnerable to carefully crafted adversarial examples. LiDAR-based perception systems may therefore be maliciously attacked, with serious safety consequences. In this paper, we present a highly deceptive adversarial obstacle generation algorithm against deep 3D detection models that mimics fake obstacles within the effective detection range of the LiDAR using a limited number of points. To achieve this goal, we first perform a physical LiDAR simulation to construct sparse obstacle point clouds. We then devise a strong attack strategy that adversarially perturbs the prototype points along the direction of each laser ray. Our method achieves a high attack success rate while complying with physical laws at the hardware level. We perform comprehensive experiments on different types of 3D detectors and find that voxel-based detectors are more vulnerable to adversarial attacks than point-based methods. For example, our approach achieves an 89% mean attack success rate against PV-RCNN using only 20 points to spoof a fake car.
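The ray-direction constraint described above can be illustrated with a minimal sketch: a spoofed LiDAR return can only appear along the laser ray that produced it, so a physically plausible perturbation moves each prototype point radially toward or away from the sensor while keeping its azimuth and elevation fixed. The function name, range bounds, and shapes below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perturb_along_rays(points, delta, r_min=0.5, r_max=80.0):
    """Perturb LiDAR points along their own ray directions (sensor at origin).

    points : (N, 3) prototype obstacle points
    delta  : (N,) signed radial offsets in meters (the adversarial perturbation)
    r_min, r_max : assumed effective detection range of the sensor

    Moving a point only along its ray preserves its angular coordinates,
    so the spoofed return remains producible by the hardware.
    """
    r = np.linalg.norm(points, axis=1)        # current ray lengths
    dirs = points / r[:, None]                # unit ray directions
    r_new = np.clip(r + delta, r_min, r_max)  # keep points within sensor range
    return dirs * r_new[:, None]

# Example: push one point 0.3 m farther from the sensor along its ray
pts = np.array([[10.0, 0.0, 0.0]])
out = perturb_along_rays(pts, np.array([0.3]))
# out is [[10.3, 0.0, 0.0]] -- same direction, longer range
```

In an attack loop, `delta` would be the variable optimized (e.g., by gradients of the detector's loss), with the clipping step enforcing the hardware-level physical constraint after each update.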