Authors
Minkyung Kim, H. Seo, Songhyeon Kim, Jung-Hoon Chun, Seong-Jin Kim, Jaehyuk Choi
Abstract
Light detection and ranging (LiDAR) systems measure distance with high depth resolution while also imaging the shapes of objects. Long-range detection requires emitting a concentrated laser beam, which in turn necessitates a laser-scanning device to achieve high spatial resolution. Traditional mechanical LiDAR systems based on 1-D SPAD sensors rely on devices such as polygon or MEMS mirrors for high-resolution imaging, but these are bulky, expensive, and difficult to manufacture [1]. Recently, solid-state LiDAR systems have emerged that pair a VCSEL array as the transmitter with a CMOS depth sensor as the receiver. Such a system measures direct time-of-flight (dTOF) one row of pixels at a time, enabling a column-parallel implementation of time-to-digital converters (TDCs), histogram memory, and processors. However, long-range detection with high depth resolution demands extensive histogram memory, which restricts both spatial and depth resolution. One approach to reducing this memory is a 2-step TDC [2]; even so, the required memory per column remains substantial, reaching up to 384b to implement a 9b column-parallel TDC. Another obstacle to improving spatial resolution is the pixel area consumed by the SPAD analog front-end (AFE) circuit. Unlike the AFE of a photon-counting imager [3], the AFE of a LiDAR sensor includes delay cells and active-recharge circuits to minimize dead time, typically requiring more than 10 transistors, including pMOS devices. Moreover, once pixel selection/masking memory and additional logic are placed within the pixel, the fill factor degrades significantly in non-stacked processes, or the pixel pitch becomes constrained in 3D-stacked processes, even if the SPAD itself is scaled down to a few micrometers.
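The histogram-memory argument can be made concrete with a small sketch. The Python snippet below is illustrative only: the 1ns bin width, 8b counter depth, and the 5b-coarse/4b-fine split are assumptions, not figures from the paper. It shows why a flat histogram over all 9b TDC codes needs far more per-column memory than a coarse-fine (2-step) arrangement that histograms the coarse and fine codes separately.

```python
# Illustrative sketch (not the paper's implementation): how per-column
# histogram memory scales with TDC resolution in a dTOF LiDAR receiver.
# All numeric parameters below are assumed for illustration.

C = 3e8            # speed of light, m/s
TDC_LSB = 1e-9     # assumed TDC bin width: 1 ns

def depth_per_bin(lsb=TDC_LSB):
    """Depth quantization of one histogram bin (round trip halves it)."""
    return C * lsb / 2

def flat_histogram_bits(tdc_bits, counter_bits):
    """Flat histogram: one counter per TDC code, 2^tdc_bits counters."""
    return (2 ** tdc_bits) * counter_bits

def two_step_bits(coarse_bits, fine_bits, counter_bits):
    """Coarse-fine (2-step) TDC: histogram the coarse codes first, then
    zoom into the winning coarse bin with the fine codes, so only
    2^coarse + 2^fine counters are needed instead of 2^(coarse+fine)."""
    return (2 ** coarse_bits + 2 ** fine_bits) * counter_bits

flat = flat_histogram_bits(9, 8)      # 9b TDC, assumed 8b counters
two_step = two_step_bits(5, 4, 8)     # same 9b total resolution

print(depth_per_bin())   # ≈ 0.15 m of depth per bin at a 1 ns LSB
print(flat)              # 4096 b per column for the flat histogram
print(two_step)          # 384 b per column for the 2-step histogram
```

With these assumed parameters the 2-step scheme lands on 384b per column, matching the figure quoted in the abstract, while a flat 9b histogram would need roughly an order of magnitude more memory.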