Autoencoder
Computer science
Turbulence
Artificial intelligence
Computation
Distortion (music)
Algorithm
Computer vision
Artificial neural network
Physics
Computer network
Thermodynamics
Amplifier
Bandwidth (computing)
Authors
Gregor Franz, Daniel Wegner, Joshué Pérez, Stefan Keßler
Abstract
Atmospheric turbulence often limits the performance of long-range imaging systems. Realistic turbulence simulations provide a means to evaluate this effect and to assess turbulence mitigation algorithms. Current methods typically use phase screens or turbulent point spread functions (PSFs) to simulate the image distortion and blur caused by turbulence. While the first method requires long computation times, the latter requires empirical models or libraries of PSF shapes and their associated tip and tilt motion, which might be overly simplistic for some applications. In this work, an approach is evaluated that avoids these issues. Generative neural network models can produce highly realistic imitations of real (image) data with short computation times. To treat anisoplanatic imaging for the considered application, the model output is an imitation PSF-grid that must be applied to the input image to yield the turbulent image. Certain shape features of the model output can be controlled by traversing subsets of the model input space or latent space. The use of a conditional variational autoencoder (cVAE) appears very promising for yielding fast computation times and realistic PSFs and is therefore examined in this work. The cVAE is trained on field trial camera images of a remote LED array. These images are treated as grids of real PSFs. First, the images are pre-processed and the PSF properties are determined for each frame. The main goal of the cVAE is the generation of PSF-grids under conditional properties, e.g., moments of the PSFs. Different approaches are discussed and employed for a qualitative evaluation of the realism of the PSF-grids generated by the trained models. A comparison of the required simulation computing time is presented, and further considerations regarding the simulation method are discussed.
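The abstract describes applying a grid of PSFs to an input image to simulate anisoplanatic blur, where each image region is blurred by its local PSF. A minimal sketch of this step is tile-wise convolution, shown below with NumPy; the function names, the uniform tiling, and the hard tile boundaries (the paper may blend neighboring PSFs) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def convolve2d_same(tile, psf):
    """FFT-based 2D convolution, cropped to the tile's size ('same' mode)."""
    th, tw = tile.shape
    ph, pw = psf.shape
    fh, fw = th + ph - 1, tw + pw - 1  # full linear-convolution size
    out = np.fft.irfft2(
        np.fft.rfft2(tile, (fh, fw)) * np.fft.rfft2(psf, (fh, fw)), (fh, fw)
    )
    y0, x0 = ph // 2, pw // 2  # crop the central region
    return out[y0:y0 + th, x0:x0 + tw]

def apply_psf_grid(image, psf_grid):
    """Simulate anisoplanatic blur by convolving each image tile with its
    local PSF (hypothetical helper, assumed grid layout).

    image:    (H, W) array
    psf_grid: (gy, gx, k, k) array of energy-normalized PSFs
    """
    gy, gx = psf_grid.shape[:2]
    H, W = image.shape
    out = np.zeros((H, W), dtype=float)
    ys = np.linspace(0, H, gy + 1, dtype=int)  # tile row boundaries
    xs = np.linspace(0, W, gx + 1, dtype=int)  # tile column boundaries
    for i in range(gy):
        for j in range(gx):
            tile = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = convolve2d_same(
                tile, psf_grid[i, j]
            )
    return out
```

A centered delta PSF in every grid cell leaves the image unchanged, which makes a convenient sanity check; real simulated grids would instead hold turbulent PSFs (e.g., cVAE samples conditioned on PSF moments).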