Artificial intelligence
Computer science
Computer vision
Monocular
Robustness (evolution)
Image sensor
Depth of field
Stereo imaging
Field of view
Image resolution
Optics
Physics
Biochemistry
Chemistry
Gene
Authors
Zhexuan Cao,Ning Li,Zhu Lili,Jiamin Wu,Qionghai Dai,Hui Qiao
Identifier
DOI:10.1038/s41377-024-01609-9
Abstract
Depth sensing plays a crucial role in various applications, including robotics, augmented reality, and autonomous driving. Monocular passive depth sensing techniques have come into their own for their cost-effectiveness and compact design, offering an alternative to expensive and bulky active depth sensors and stereo vision systems. While the light-field camera can address the defocus ambiguity inherent in 2D cameras and achieve unambiguous depth perception, it compromises the spatial resolution and usually struggles with the effect of optical aberration. In contrast, our previously proposed meta-imaging sensor [1] has overcome such hurdles by reconciling the spatial-angular resolution trade-off and achieving multi-site aberration correction for high-resolution imaging. Here, we present a compact meta-imaging camera and an analytical framework for the quantification of monocular depth sensing precision by calculating the Cramér–Rao lower bound of depth estimation. Quantitative evaluations reveal that the meta-imaging camera exhibits not only higher precision over a broader depth range than the light-field camera but also superior robustness against changes in signal-background ratio. Moreover, both the simulation and experimental results demonstrate that the meta-imaging camera maintains the capability of providing precise depth information even in the presence of aberrations. Showing promising compatibility with other point-spread-function engineering methods, we anticipate that the meta-imaging camera may facilitate the advancement of monocular passive depth sensing in various applications.
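To illustrate the kind of calculation the abstract refers to, the following is a minimal sketch of computing the Cramér–Rao lower bound (CRLB) on depth estimation under a Poisson photon-noise model. The Gaussian defocus PSF `gaussian_psf` and its parameters (`sigma0`, `a`, `photons`, `bg`) are illustrative assumptions for a generic 2D camera, not the paper's actual meta-imaging model; the Fisher-information formula for Poisson noise, I(z) = Σᵢ (∂μᵢ/∂z)² / μᵢ with CRLB = 1/I(z), is standard.

```python
import numpy as np

def gaussian_psf(z, n=33, sigma0=1.0, a=0.5, photons=1000.0, bg=2.0):
    """Expected photon counts of a pixelated 2D Gaussian PSF.

    Toy defocus model (assumption): sigma(z) = sigma0 * sqrt(1 + (a*z)^2),
    i.e., blur width grows symmetrically with defocus |z|. A uniform
    background bg sets the signal-background ratio.
    """
    sigma = sigma0 * np.sqrt(1.0 + (a * z) ** 2)
    r = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(r, r)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    psf /= psf.sum()  # normalize so `photons` is the total signal
    return photons * psf + bg

def depth_crlb(z, dz=1e-3, **kw):
    """CRLB on depth z under Poisson noise.

    Fisher information: I(z) = sum_i (dmu_i/dz)^2 / mu_i,  CRLB = 1/I(z).
    The derivative is taken by central finite differences.
    """
    dmu = (gaussian_psf(z + dz, **kw) - gaussian_psf(z - dz, **kw)) / (2.0 * dz)
    mu = gaussian_psf(z, **kw)
    fisher = np.sum(dmu ** 2 / mu)
    return 1.0 / fisher
```

Because this toy PSF is an even function of z, the Fisher information vanishes at z = 0 and the CRLB diverges there, which is exactly the defocus ambiguity of a plain 2D camera that the abstract mentions; light-field and meta-imaging cameras break this symmetry through angular sampling and thereby keep the bound finite across the depth range.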