Partial differential equations
Discretization
Artificial neural networks
Nonlinear systems
Curse of dimensionality
Convolutional neural networks
Inverse problems
Applied mathematics
Computer science
Convergence (economics)
Mathematics
Artificial intelligence
Mathematical analysis
Physics
Quantum mechanics
Economics
Economic growth
Authors
Biao Yuan, He Wang, Ana Heitor, Xiaohui Chen
Identifier
DOI: 10.1016/j.jcp.2024.113284
Abstract
Physical and interdisciplinary problems in science and engineering are mainly described by partial differential equations (PDEs). Recently, physics-informed neural networks (PINNs), which solve PDEs by employing deep neural networks with physical constraints as data-driven models, have been pioneered for surrogate modelling and inverse problems. However, the original PINNs based on fully connected neural networks have intrinsic limitations and perform poorly on PDEs with nonlinearity, steep gradients, multiscale characteristics or high dimensionality, whose complex features are hard to capture. This leads to difficulty converging to correct solutions and to high computational costs. To address these problems, this paper proposes a novel physics-informed convolutional neural network framework (f-PICNN) that solves PDEs in the space-time domain without any labelled data; it is built on finite discretization schemes with a stack of nonlinear convolutional units (NCUs), and its memory mechanism considerably speeds up convergence. Specifically, the initial conditions (ICs) are hard-encoded into the network as the first time-step solution and used to extrapolate the next time-step solution. The Dirichlet boundary conditions (BCs) are constrained by soft BC enforcement, while the Neumann BCs are hard enforced. Furthermore, the loss function is designed as a set of discretized PDE residuals and optimized to conform to physical laws. Finally, the proposed auto-regressive model is shown to be effective on a wide range of 1D and 2D nonlinear PDEs in both space and time under different finite discretization schemes (e.g., Euler, Crank-Nicolson and fourth-order Runge-Kutta). The numerical results demonstrate that the proposed framework not only learns PDEs efficiently but also offers greater conceptual simplicity and potential for extrapolation when learning PDEs from a limited dataset.
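As a rough illustration of the mechanics the abstract describes (the IC hard-encoded as the first time-step solution, an auto-regressive rollout, and a loss built from discretized PDE residuals), here is a minimal, hypothetical PyTorch sketch for the 1D heat equation u_t = nu * u_xx with an explicit-Euler residual. It is not the authors' f-PICNN: the NCUStack module, the hard Dirichlet masking, and all hyperparameters are simplifying assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

nx, nt = 64, 20                  # spatial grid points and rollout steps (illustrative)
dx, dt, nu = 1.0 / (nx - 1), 1e-4, 0.1

class NCUStack(nn.Module):
    """Small stack of 1D convolutions standing in for the paper's nonlinear
    convolutional units (NCUs); the actual architecture is more elaborate."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, width, 5, padding=2), nn.Tanh(),
            nn.Conv1d(width, width, 5, padding=2), nn.Tanh(),
            nn.Conv1d(width, 1, 5, padding=2),
        )

    def forward(self, u):        # u: (batch, 1, nx) -> predicted next-step solution
        return self.net(u)

# Fixed finite-difference stencil for u_xx, applied as a convolution.
lap_kernel = torch.tensor([[[1.0, -2.0, 1.0]]]) / dx ** 2

def laplacian(u):
    return F.conv1d(u, lap_kernel, padding=1)

model = NCUStack()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.linspace(0.0, 1.0, nx).view(1, 1, nx)
u0 = torch.sin(torch.pi * x)     # initial condition, hard-encoded as the step-0 solution

# Mask that pins u = 0 at both ends: hard Dirichlet BCs for simplicity
# (the paper instead uses soft Dirichlet and hard Neumann enforcement).
bc_mask = torch.ones_like(x)
bc_mask[..., 0] = bc_mask[..., -1] = 0.0

for epoch in range(200):
    opt.zero_grad()
    u_prev, loss = u0, torch.zeros(())
    for _ in range(nt):          # auto-regressive rollout in time
        u_next = model(u_prev) * bc_mask
        # Explicit-Euler residual of the PDE: (u^{n+1} - u^n)/dt - nu * u_xx^n = 0
        residual = (u_next - u_prev) / dt - nu * laplacian(u_prev)
        loss = loss + residual[..., 1:-1].pow(2).mean()   # interior points only
        u_prev = u_next
    loss.backward()
    opt.step()
```

Under these assumptions, switching to the Crank-Nicolson or fourth-order Runge-Kutta schemes mentioned in the abstract would only change how the residual is assembled from u_prev and u_next; the auto-regressive, label-free training loop stays the same.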