Keywords
Generative grammar
Dynamics (music)
Generative model
Computer science
Artificial intelligence
Psychology
Pedagogy
Authors
Yang Lijun, Zeyu Li, Han Wang, Yue Zhang, Qingfei Fu, Jingxuan Li, Li-zi Qin, Ruo-Yu Dong, Hao Sun, Yue Deng
Source
Journal: Research Square
Date: 2024-05-09
Identifier
DOI:10.21203/rs.3.rs-4183330/v1
Abstract
Reconstructing spatiotemporal dynamics from sparse sensor measurements is an outstanding problem, commonly encountered in a wide spectrum of scientific and engineering applications. The problem is particularly challenging when the number and/or placement of sensors (e.g., randomly located) is severely limited. Existing end-to-end learning models ordinarily suffer from generalization issues in full-field reconstruction of spatiotemporal dynamics, especially in the sparse-data regimes typical of real-world applications. To this end, we propose a sparse-sensor-assisted score-based generative model (S3GM) to reconstruct and predict full-field spatiotemporal dynamics from sparse measurements. Instead of directly learning the mapping between input and output pairs, an unconditional generative model is first pretrained in a self-supervised manner to capture the joint distribution of a large corpus of pretraining data, and is then used in a sampling process conditioned on unseen sparse measurements. The efficacy of S3GM has been verified on multiple dynamical systems with various synthetic, real-world, and lab-test datasets (ranging from turbulent flow modeling to weather/climate forecasting). The results demonstrate the excellent performance of S3GM in zero-shot reconstruction and prediction of spatiotemporal dynamics, even with high levels of data sparsity and noise. We find that S3GM exhibits high accuracy, generalizability, and robustness across different reconstruction tasks.
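The abstract only sketches the two-stage idea (unconditional pretraining, then measurement-conditioned sampling). Below is a minimal, illustrative sketch of how such conditioned sampling from a pretrained score model could look, using annealed Langevin dynamics with a data-consistency term. The function names, noise schedule, and guidance weight are hypothetical placeholders and are not taken from the paper; the actual S3GM sampling procedure may differ.

```python
import torch

def conditioned_sampling(score_net, mask, y_sparse, shape,
                         sigmas, n_steps_each=10, step_lr=2e-5, guidance=1.0):
    """Sample a full field consistent with sparse observations.

    score_net(x, sigma) -> estimated score of the noise-perturbed data distribution
    mask                -> binary tensor marking observed sensor locations
    y_sparse            -> sparse measurements (zeros at unobserved points)
    sigmas              -> decreasing noise levels used during pretraining
    """
    x = torch.randn(shape)  # start from pure noise
    for sigma in sigmas:
        step = step_lr * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps_each):
            # Unconditional score from the pretrained generative model
            score = score_net(x, sigma)
            # Gradient of a measurement-consistency term ||mask * (x - y)||^2
            consistency = -guidance * mask * (x - y_sparse) / sigma ** 2
            noise = torch.randn_like(x)
            x = x + step * (score + consistency) + (2 * step) ** 0.5 * noise
        # Optionally re-impose the observed values after each noise level
        x = mask * y_sparse + (1 - mask) * x
    return x
```

Because the generative model is pretrained without reference to any particular sensor layout, the same pretrained network can, under this scheme, be conditioned on arbitrary sparse masks at sampling time, which is consistent with the zero-shot behavior described in the abstract.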