Authors
Nicola Rares Franco, Stefania Fresca, Andrea Manzoni, Paolo Zunino
Identifier
DOI: 10.1016/j.neunet.2023.01.029
Abstract
Recently, deep Convolutional Neural Networks (CNNs) have proven successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite-dimensional input μ ∈ ℝ^p onto a functional output u_μ : [0,1]^d → ℝ, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds with numerical experiments that illustrate their application.
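To make the setting concrete, below is a minimal sketch (assuming PyTorch, which the paper does not prescribe) of a CNN of the kind the abstract describes: a network that takes a parameter vector μ ∈ ℝ^p and returns a discretization of the output function u_μ on a uniform grid over [0,1]^2 (the case d = 2). The dense lifting layer, the transposed-convolution upsampling, and all sizes (p = 4, a 32 × 32 grid, 16 channels) are illustrative assumptions, not the architecture analyzed in the paper.

import torch
import torch.nn as nn

class CNNDecoder(nn.Module):
    """Maps a parameter vector mu in R^p to a discretized field on a grid x grid mesh.

    Hypothetical sketch: the dense-lifting-then-transposed-convolution layout
    and all layer sizes are illustrative choices, not the paper's architecture.
    """
    def __init__(self, p: int = 4, channels: int = 16, grid: int = 32):
        super().__init__()
        self.grid = grid
        self.channels = channels
        # Lift mu to a coarse (channels x grid/4 x grid/4) feature map.
        self.lift = nn.Linear(p, channels * (grid // 4) ** 2)
        # Two transposed convolutions upsample 4x to the target grid
        # (each layer doubles the spatial resolution).
        self.up = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, mu: torch.Tensor) -> torch.Tensor:
        h = self.lift(mu).view(-1, self.channels, self.grid // 4, self.grid // 4)
        return self.up(h).squeeze(1)  # shape: (batch, grid, grid)

# Usage: approximate the discretized input-to-output map mu -> u_mu on [0,1]^2.
model = CNNDecoder(p=4, grid=32)
mu = torch.randn(8, 4)   # batch of 8 parameter vectors in R^4
u = model(mu)            # discretized outputs, shape (8, 32, 32)
print(u.shape)

In a setup of this form, the "hyperparameters defining the neural network architecture" whose role the paper's error estimates interpret would correspond to choices such as the number of channels, layers, and kernel sizes.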