Randomness
Artificial intelligence
Computer science
Process (computing)
Representation (politics)
Turbulence
Object (grammar)
Computer vision
Pattern recognition (psychology)
Algorithm
Physics
Meteorology
Mathematics
Operating system
Statistics
Politics
Law
Political science
Authors
Darui Jin,Ying Chen,Yi Lu,Junzhang Chen,Peng Wang,Zichao Liu,Sheng Guo,Xiangzhi Bai
Identifier
DOI:10.1038/s42256-021-00392-1
Abstract
A turbulent medium with eddies of different scales gives rise to fluctuations in the index of refraction during the process of wave propagation, which interferes with the original spatial relationship, phase relationship and optical path. The outputs of two-dimensional imaging systems suffer from anamorphosis brought about by this effect. Randomness, along with multiple types of degradation, makes it a challenging task to analyse the reciprocal physical process. Here, we present a generative adversarial network (TSR-WGAN), which integrates temporal and spatial information embedded in the three-dimensional input to learn the representation of the residual between the observed and latent ideal data. Vision-friendly and credible sequences are produced without extra assumptions on the scale and strength of turbulence. The capability of TSR-WGAN is demonstrated through tests on our dataset, which contains 27,458 sequences with 411,870 frames of algorithm-simulated, physically simulated and real data. TSR-WGAN exhibits promising visual quality and a deep understanding of the disparity between random perturbations and object movements. These preliminary results also shed light on the potential of deep learning to parse stochastic physical processes from particular perspectives and to solve complicated image reconstruction problems given limited data.

Turbulent optical distortions in the atmosphere limit the ability of optical technologies such as laser communication and long-distance environmental monitoring. A new method using adversarial networks learns to counter the physical processes underlying the turbulence so that complex optical scenes can be reconstructed.
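To make the residual-learning idea in the abstract concrete, the following is a minimal sketch, assuming a PyTorch-style 3D-convolutional generator that predicts the residual between the distorted and ideal frame sequence, paired with a WGAN-style critic. The class names (ResidualGenerator3D, Critic3D), layer widths and loss terms are illustrative assumptions and do not reproduce the actual TSR-WGAN architecture or training scheme described in the paper.

```python
# Hypothetical sketch: residual learning over a 3D (time x height x width) input
# with a Wasserstein-style critic. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualGenerator3D(nn.Module):
    """Predicts a per-pixel residual from a short frame sequence (N, C, T, H, W)."""

    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, channels, kernel_size=3, padding=1),
        )

    def forward(self, distorted: torch.Tensor) -> torch.Tensor:
        # Learn the residual between the observed (turbulence-distorted) and the
        # latent ideal sequence, then add it back onto the input.
        return distorted + self.body(distorted)


class Critic3D(nn.Module):
    """WGAN critic that scores a restored sequence with an unbounded scalar."""

    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, width, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(width, width * 2, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(width * 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


if __name__ == "__main__":
    gen, critic = ResidualGenerator3D(), Critic3D()
    distorted = torch.randn(2, 1, 5, 64, 64)  # batch of 5-frame grayscale clips
    clean = torch.randn(2, 1, 5, 64, 64)      # corresponding "ideal" clips

    restored = gen(distorted)
    # Wasserstein critic objective (gradient penalty omitted for brevity).
    critic_loss = critic(restored.detach()).mean() - critic(clean).mean()
    # Generator combines the adversarial term with a pixel-wise fidelity term.
    gen_loss = -critic(restored).mean() + F.l1_loss(restored, clean)
    print(critic_loss.item(), gen_loss.item())
```

The additive skip connection is the key design choice sketched here: the network only has to model the turbulence-induced perturbation rather than re-synthesize the whole scene, which is one common way to realize "learning the residual between the observed and latent ideal data".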