Artificial neural network
Computer science
Overhead (engineering)
Stochastic neural network
Artificial intelligence
Time delay neural network
Operating system
Authors
Yifan Wang, Linlin Zhong
Identifier
DOI:10.1016/j.jcp.2023.112603
Abstract
Physics-informed neural network (PINN) has been a prevalent framework for solving PDEs since it was proposed. By incorporating physical information into the neural network through loss functions, it can predict solutions to PDEs in an unsupervised manner. However, the design of the neural network architecture still relies largely on prior knowledge and experience, which leads to considerable trial-and-error effort and high computational overhead. Therefore, we propose a neural architecture search-guided method, namely NAS-PINN, to automatically search for the optimal neural architecture for solving certain PDEs. By relaxing the search space into a continuous one and using masks to enable the addition of tensors of different shapes, NAS-PINN can be trained through a bi-level optimization, where the inner loop optimizes the weights and biases of the neural network and the outer loop optimizes the architecture parameters. We verify the ability of NAS-PINN through several numerical experiments involving the Poisson, Burgers, and advection equations. The characteristics of effective neural architectures for solving different PDEs are summarized, which can be used to guide the design of neural networks in PINN. It is found that more hidden layers do not necessarily mean better performance and can sometimes be harmful. In particular, for the Poisson and advection equations, a shallow neural network with more neurons is more appropriate in PINNs. It is also indicated that, for complex problems, neural networks with residual connections can improve the performance of PINNs.
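As a minimal sketch of the approach described in the abstract (not the authors' NAS-PINN implementation), the following PyTorch example shows the three ingredients the abstract names: a continuous (softmax-relaxed) search over candidate hidden-layer widths, masks that zero-pad narrower layers so tensors of different shapes can be added, and a bi-level loop that alternates weight updates (inner) with architecture-parameter updates (outer). The 1D Poisson problem, the candidate widths, the network depth, and the learning rates are illustrative assumptions.

```python
# Illustrative sketch, not the authors' code: DARTS-style continuous relaxation
# over candidate hidden-layer widths for a PINN solving the 1D Poisson problem
#   -u''(x) = pi^2 sin(pi x),  u(0) = u(1) = 0.
import torch
import torch.nn as nn

MAX_WIDTH = 32
CANDIDATE_WIDTHS = [8, 16, 32]          # assumed search space for each hidden layer

def width_mask(width):
    """Zero-pad a candidate width up to MAX_WIDTH so outputs can be summed."""
    m = torch.zeros(MAX_WIDTH)
    m[:width] = 1.0
    return m

class MixedLayer(nn.Module):
    """One searchable hidden layer: a softmax-weighted sum over masked widths."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(MAX_WIDTH, MAX_WIDTH)
        self.alpha = nn.Parameter(torch.zeros(len(CANDIDATE_WIDTHS)))  # architecture parameters
        self.register_buffer("masks", torch.stack([width_mask(w) for w in CANDIDATE_WIDTHS]))

    def forward(self, x):
        h = torch.tanh(self.linear(x))
        w = torch.softmax(self.alpha, dim=0)                # relaxed, differentiable choice
        return sum(w[i] * self.masks[i] * h for i in range(len(CANDIDATE_WIDTHS)))

class SearchablePINN(nn.Module):
    def __init__(self, depth=3):
        super().__init__()
        self.inp = nn.Linear(1, MAX_WIDTH)
        self.layers = nn.ModuleList(MixedLayer() for _ in range(depth))
        self.out = nn.Linear(MAX_WIDTH, 1)

    def forward(self, x):
        h = torch.tanh(self.inp(x))
        for layer in self.layers:
            h = layer(h)
        return self.out(h)

def pinn_loss(model):
    """Physics-informed loss: PDE residual on interior points plus boundary penalty."""
    x = torch.rand(128, 1, requires_grad=True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = -d2u - torch.pi**2 * torch.sin(torch.pi * x)
    xb = torch.tensor([[0.0], [1.0]])                       # Dirichlet boundary points
    return residual.pow(2).mean() + model(xb).pow(2).mean()

model = SearchablePINN()
arch_params = [p for n, p in model.named_parameters() if n.endswith("alpha")]
net_params = [p for n, p in model.named_parameters() if not n.endswith("alpha")]
opt_w = torch.optim.Adam(net_params, lr=1e-3)   # inner loop: weights and biases
opt_a = torch.optim.Adam(arch_params, lr=3e-4)  # outer loop: architecture parameters

for step in range(200):
    opt_w.zero_grad(); pinn_loss(model).backward(); opt_w.step()   # inner update
    opt_a.zero_grad(); pinn_loss(model).backward(); opt_a.step()   # outer update
```

After the search, one would typically keep, for each layer, the candidate width with the largest architecture weight and retrain the resulting fixed network from scratch; the masking trick is only needed while the relaxed supernet is being trained.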