Mathematics
Smoothness
Dimension (graph theory)
Activation function
Artificial neural network
Function (biology)
Mathematical analysis
Trigonometric functions
Applied mathematics
Polynomial
Function approximation
Combinatorics
Geometry
Evolutionary biology
Biology
Computer science
Machine learning
Authors
Jonathan H. Siegel, Jinchao Xu
Identifier
DOI: 10.1016/j.acha.2021.12.005
Abstract
We study the approximation properties of shallow neural networks with an activation function which is a power of the rectified linear unit. Specifically, we consider the dependence of the approximation rate on the dimension and the smoothness in the spectral Barron space of the underlying function $f$ to be approximated. We show that as the smoothness index $s$ of $f$ increases, shallow neural networks with $\mathrm{ReLU}^k$ activation function obtain an improved approximation rate up to a best possible rate of $O(n^{-(k+1)}\log(n))$ in $L^2$, independent of the dimension $d$. The significance of this result is that the activation function $\mathrm{ReLU}^k$ is fixed independent of the dimension, while for classical methods the degree of polynomial approximation or the smoothness of the wavelets used would have to increase in order to take advantage of the dimension-dependent smoothness of $f$. In addition, we derive improved approximation rates for shallow neural networks with cosine activation function on the spectral Barron space. Finally, we prove lower bounds showing that the approximation rates attained are optimal under the given assumptions.
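For orientation, the approximants studied in the abstract are shallow (one-hidden-layer) networks of width $n$ with activation $\mathrm{ReLU}^k$, i.e. functions of the form $f_n(x) = \sum_{i=1}^n a_i\,\mathrm{ReLU}(w_i \cdot x + b_i)^k$. The sketch below is only an illustration of this approximant class, not the authors' construction; the function and variable names (e.g. `shallow_relu_k_net`) are hypothetical. The paper's results bound how fast the best such $f_n$ can approach a target $f$ in the spectral Barron space as the width $n$ grows.

```python
import numpy as np

def relu_k(z, k):
    """Power of the rectified linear unit: max(0, z)**k."""
    return np.maximum(z, 0.0) ** k

def shallow_relu_k_net(x, weights, biases, coeffs, k):
    """Evaluate f_n(x) = sum_i coeffs[i] * ReLU(weights[i] . x + biases[i])**k.

    x       : (d,) input point
    weights : (n, d) inner weights w_i
    biases  : (n,)  biases b_i
    coeffs  : (n,)  outer coefficients a_i
    k       : power applied to the ReLU activation
    """
    pre_activations = weights @ x + biases      # shape (n,)
    return coeffs @ relu_k(pre_activations, k)  # scalar f_n(x)

# Illustrative usage: a width-100 network in dimension 5 with ReLU^2 activation
# and randomly chosen (not fitted) parameters.
rng = np.random.default_rng(0)
d, n, k = 5, 100, 2
W = rng.standard_normal((n, d))
b = rng.standard_normal(n)
a = rng.standard_normal(n) / n
x = rng.standard_normal(d)
print(shallow_relu_k_net(x, W, b, a, k))
```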