Weighting
Uncertainty quantification
Inverse problem
Computer science
Artificial neural network
Posterior probability
Bayesian probability
Mathematical optimization
Stability (learning theory)
Artificial intelligence
Convergence (economics)
Machine learning
Algorithm
Mathematics
Medicine
Radiology
Mathematical analysis
Economic growth
Economics
Authors
Sarah Perez, Suryanarayana Maddu, Ivo F. Sbalzarini, Philippe Poncet
Identifier
DOI:10.1016/j.jcp.2023.112342
Abstract
In this paper, we present a novel methodology for automatic adaptive weighting of Bayesian Physics-Informed Neural Networks (BPINNs), and we demonstrate that it makes it possible to robustly address multi-objective and multi-scale problems. BPINNs are a popular framework for data assimilation, combining uncertainty quantification (UQ) with partial differential equation (PDE) constraints. The relative weights of the terms in the BPINN target distribution are directly related to the inherent uncertainty in the respective learning tasks. Yet, they are usually set manually a priori, which can lead to pathological behavior, stability concerns, and conflicts between tasks — obstacles that have deterred the use of BPINNs for inverse problems with multi-scale dynamics. The present weighting strategy automatically tunes the weights by considering the multi-task nature of the target posterior distribution. We show that this remedies the failure modes of BPINNs and provides efficient exploration of the optimal Pareto front. This leads to better convergence and stability of BPINN training while reducing sampling bias. The determined weights moreover carry information about task uncertainties, reflecting noise levels in the data and the adequacy of the PDE model. We demonstrate this in numerical experiments in Sobolev training, comparing against an analytically $\epsilon$-optimal baseline, and in a multi-scale Lotka-Volterra inverse problem. Finally, we apply this framework to an inpainting task and an inverse problem involving latent field recovery for incompressible flow in complex geometries.
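To illustrate the core idea — that a task's weight in the joint objective should reflect its inherent uncertainty — here is a minimal sketch of inverse-variance weighting of multi-task residuals. The function names (`adaptive_weights`, `weighted_loss`) and the NumPy formulation are illustrative assumptions; the paper's actual scheme tunes weights within a Bayesian sampling procedure, not via this simple heuristic.

```python
import numpy as np

def adaptive_weights(residuals, eps=1e-12):
    """Assign each task a weight inversely proportional to the variance of
    its residuals, so noisier (more uncertain) tasks contribute less.
    This is a generic uncertainty-based heuristic, not the paper's sampler."""
    variances = np.array([np.var(r) + eps for r in residuals])
    return 1.0 / variances

def weighted_loss(residuals, weights):
    """Weighted sum of per-task mean squared residuals."""
    return sum(w * np.mean(np.square(r)) for w, r in zip(weights, residuals))

# Two tasks: low-noise data-fit residuals and high-noise PDE residuals.
rng = np.random.default_rng(0)
res_data = 0.1 * rng.standard_normal(1000)  # small data noise
res_pde = 1.0 * rng.standard_normal(1000)   # large model discrepancy
w = adaptive_weights([res_data, res_pde])
loss = weighted_loss([res_data, res_pde], w)
# The low-noise task receives the larger weight.
```

Because the weights are computed from the residuals themselves, they also act as a readout of task uncertainty, echoing the abstract's point that the determined weights reflect data noise levels and PDE model adequacy.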