Speckle pattern
Computer science
Image (mathematics)
Artificial intelligence
Transfer (computing)
Computer vision
Parallel computing
Authors
He Zhao, Yanzhu Zhang, Hao Wu, Jixiong Pu
Source
Journal: Physica Scripta
[IOP Publishing]
Date: 2024-03-25
Volume/Issue: 99(5): 056003
Identifier
DOI: 10.1088/1402-4896/ad37aa
Abstract
In recent years, convolutional neural networks (CNNs) have been successfully applied to reconstruct images from the speckle patterns produced when light from objects passes through scattering media. To achieve this, a large amount of data is collected for training the CNN. However, in certain cases, the characteristics of the light passing through the scattering medium may vary; in such situations, a substantial amount of new data must be collected to re-train the CNN before images can be reconstructed. To address this challenge, transfer learning techniques are introduced in this study. Specifically, we propose a novel Residual U-Net Generative Adversarial Network, denoted ResU-GAN. The network is initially pre-trained on a large amount of data collected from either visible or non-visible light sources, and subsequently fine-tuned on a small amount of data collected from the other spectral band. Experimental results demonstrate the outstanding reconstruction performance of the ResU-GAN network. Furthermore, by combining transfer learning techniques, the network enables the reconstruction of speckle images across different datasets. The findings presented in this paper provide a more generalized approach for utilizing CNNs in cross-spectral speckle imaging.
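The abstract describes a two-stage workflow: pre-train a residual U-Net generator on a large speckle dataset from one spectral band, then fine-tune it on a small dataset from the other band. The sketch below illustrates that transfer-learning pattern in PyTorch under stated assumptions: the network layout, layer sizes, loss (a pixel-wise L1 term only, with the adversarial term omitted), checkpoint file name, and data loader are hypothetical placeholders, not the authors' ResU-GAN implementation.

```python
# Minimal sketch (not the authors' code): fine-tuning a small residual
# U-Net generator on a new speckle dataset. All shapes, learning rates
# and file names are hypothetical placeholders.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class ResUNetGenerator(nn.Module):
    """Toy residual U-Net: one down/up level plus an encoder-decoder skip."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), ResBlock(32))
        self.down = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.mid = ResBlock(64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(ResBlock(32), nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        return torch.sigmoid(self.dec(self.up(m) + e))  # U-Net skip connection

def fine_tune(gen, loader, epochs=5, lr=1e-4, freeze_encoder=True):
    """Adapt a pre-trained generator to a small dataset from another spectrum."""
    if freeze_encoder:  # keep low-level speckle features learned during pre-training
        for p in gen.enc.parameters():
            p.requires_grad = False
    opt = torch.optim.Adam((p for p in gen.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.L1Loss()  # pixel reconstruction loss; adversarial loss omitted here
    for _ in range(epochs):
        for speckle, target in loader:
            opt.zero_grad()
            loss = loss_fn(gen(speckle), target)
            loss.backward()
            opt.step()
    return gen

if __name__ == "__main__":
    gen = ResUNetGenerator()
    # gen.load_state_dict(torch.load("pretrained_visible.pt"))  # hypothetical pre-trained weights
    toy_loader = [(torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32))]  # stand-in for a small dataset
    fine_tune(gen, toy_loader, epochs=1)
```

Freezing the encoder during fine-tuning is one common choice when the new dataset is small; whether the paper freezes any layers is not stated in the abstract, so the flag is exposed as a parameter rather than assumed.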