Generalization
Artificial neural network
Function approximation
Computer science
Nonlinear system
Operator (biology)
Operator theory
Deep learning
Function (biology)
Mathematics
Artificial intelligence
Discrete mathematics
Mathematical analysis
Biochemistry
Quantum mechanics
Evolutionary biology
Transcription factor
Biology
Gene
Physics
Repressor
Chemistry
Authors
Lu Lu,Pengzhan Jin,Guofei Pang,Zhongqiang Zhang,George Em Karniadakis
Identifier
DOI:10.1038/s42256-021-00302-5
Abstract
It is widely known that neural networks (NNs) are universal approximators of continuous functions. However, a less known but powerful result is that an NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the structure and potential of deep neural networks (DNNs) in learning continuous operators or complex systems from streams of scattered data. Here, we thus extend this theorem to DNNs. We design a new network with small generalization error, the deep operator network (DeepONet), which consists of a DNN for encoding the discrete input function space (branch net) and another DNN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study different formulations of the input function space and their effect on the generalization error for 16 diverse applications. Neural networks are known as universal approximators of continuous functions, but they can also approximate any mathematical operator (mapping a function to another function), which is an important capability for complex systems such as robotics control. A new deep neural network called DeepONet can learn various mathematical operators with small generalization error.
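The abstract describes the branch/trunk decomposition: the branch net encodes the input function through its values at fixed sensor locations, the trunk net encodes a query point in the output domain, and their feature vectors are combined by a dot product to give G(u)(y). The following is a minimal, untrained NumPy sketch of that forward pass, not the authors' implementation; the helper names (`mlp`, `init_mlp`, `deeponet`) and the layer sizes are illustrative assumptions.

```python
import numpy as np

def mlp(x, weights):
    # Simple fully connected net: tanh on all layers except the last (linear).
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init_mlp(sizes, rng):
    # Random weights and zero biases for each consecutive pair of layer sizes.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def deeponet(u_sensors, y, branch, trunk):
    # G(u)(y) ~= sum_k b_k(u(x_1), ..., u(x_m)) * t_k(y)
    b = mlp(u_sensors, branch)   # branch features from the sampled input function
    t = mlp(y, trunk)            # trunk features from the query location
    return b @ t                 # scalar prediction of the operator output

rng = np.random.default_rng(0)
m, p = 100, 40                   # number of sensors, latent feature width (assumed)
branch = init_mlp([m, 64, p], rng)
trunk = init_mlp([1, 64, p], rng)

xs = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * xs)       # an input function sampled at the sensor points
out = deeponet(u, np.array([0.5]), branch, trunk)
```

In a real setting the branch and trunk weights would be trained jointly on pairs of input functions and operator outputs; the sketch only shows how the two sub-networks' outputs combine at a single query point.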