Compressed sensing
Computer science
Nyquist rate
Frame rate
Throughput
Sampling (signal processing)
Time domain
Filter (signal processing)
Nyquist–Shannon sampling theorem
Optics
Signal (programming language)
Frequency domain
Transmission (telecommunications)
Noise (video)
Image quality
Algorithm
Physics
Computer vision
Telecommunications
Image (mathematics)
Programming language
Wireless
Authors
Rubing Li, Yueyun Weng, Siyuan Lin, Chao Wei, Liye Mei, Shubin Wei, Yifan Yao, Fuling Zhou, Du Wang, Keisuke Goda, Cheng Lei
Source
Journal: ACS Photonics
Publisher: American Chemical Society
Date: 2023-02-17
Volume/Issue: 10 (7): 2399-2406
Citations: 5
Identifiers
DOI: 10.1021/acsphotonics.2c01708
Abstract
Optical time-stretch (OTS) imaging has shown significant advantages in many applications, such as high-throughput cell screening, owing to its high frame rate and capability for continuous image acquisition. Unfortunately, its practical application is fundamentally limited by the burden that the extreme throughput places on data transmission and storage. Compressed sensing (CS) has been considered a promising approach to this problem, since it recovers the signal at a sampling rate significantly lower than the Nyquist rate. However, the temporal stretching and compression processes, which are currently realized with two sections of dispersive media with complementary group velocity dispersion (GVD), make the system complex and introduce notable power loss, leading to a low signal-to-noise ratio (SNR). In this work, we propose and demonstrate all-optical Fourier-domain-compressed time-stretch imaging with low-pass filtering. Only one section of dispersive medium is needed to temporally stretch the pulses, while compression of the stretched pulses is achieved with low-pass filtering after optical-to-electrical conversion. Therefore, the structure of the system is notably simplified, and the SNR, and thus the image quality, can be improved. The principle of this method is theoretically analyzed, and its performance is experimentally demonstrated. The results show that our system can image cells flowing at a high speed of 1 m/s with a spatial resolution of 2.19 μm while the original data are compressed by 80%. Our method can boost the application of OTS imaging in fields where high-throughput, real-time imaging is required.
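The abstract's central claim is that the image can be recovered after the acquired data are reduced to roughly 20% of the Nyquist-rate volume by low-pass filtering. The sketch below is a minimal, hypothetical illustration of that compressed-sensing principle, not the authors' reconstruction pipeline: a sparse 1-D signal is measured through a low-pass (partial Fourier) operator that keeps about 20% of the spectrum and is then recovered by iterative soft-thresholding (ISTA). The signal length, sparsity model, cutoff, and solver choice are all assumptions made for illustration.

```python
# Minimal compressed-sensing sketch (illustrative only, not the paper's method):
# recover a sparse 1-D signal from low-pass (sub-Nyquist) Fourier measurements.
import numpy as np

rng = np.random.default_rng(0)

n = 512                                   # full record length (hypothetical 1-D line scan)
k = 10                                    # number of non-zero samples (sparsity)

# Sparse "object": a few well-separated bright features on a dark background.
x_true = np.zeros(n)
positions = rng.choice(np.arange(0, n, 32), size=k, replace=False)
x_true[positions] = rng.uniform(1.0, 2.0, size=k)

# Measurement: keep only the low-frequency DFT coefficients, mimicking a
# low-pass filter that retains ~20% of the band (the 80% compression quoted
# in the abstract).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary DFT matrix
f_c = 51                                  # low-pass cutoff bin
low_idx = np.r_[0:f_c + 1, n - f_c:n]     # bins with |frequency| <= f_c
A = F[low_idx, :]                         # (2*f_c + 1) x n partial Fourier matrix
y = A @ x_true                            # compressed measurements

# Sparse recovery with ISTA: minimize 0.5*||y - A x||^2 + lam*||x||_1.
lam, step = 1e-2, 1.0                     # step = 1 is valid because the rows of A are orthonormal
x = np.zeros(n)
for _ in range(3000):
    grad = (A.conj().T @ (A @ x - y)).real   # gradient of the data-fit term for real x
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold

print("retained fraction of samples:", len(low_idx) / n)
print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the paper, the low-pass filter acts on the temporally stretched waveform after optical-to-electrical conversion and the reconstruction exploits the sparsity of the cell images; the sketch only mirrors the generic structure of a sub-Nyquist measurement followed by sparse recovery.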