Keywords
Representation (politics)
Smoothness
Kolmogorov structure function
Function (biology)
Artificial neural network
Mathematics
Kolmogorov complexity
Transfer function
Interpretation (philosophy)
Kolmogorov equations (Markov jump process)
Computer science
Pure mathematics
Algorithm
Artificial intelligence
Mathematical analysis
Differential equation
Politics
Biology
Electrical engineering
Engineering
Differential-algebraic equation
Programming language
Law
Evolutionary biology
Political science
Ordinary differential equation
Author
Johannes Schmidt-Hieber
Identifier
DOI:10.1016/j.neunet.2021.01.020
Abstract
There is a longstanding debate about whether the Kolmogorov–Arnold representation theorem can explain the use of more than one hidden layer in neural networks. The Kolmogorov–Arnold representation decomposes a multivariate function into an interior and an outer function and therefore does indeed have a structure similar to that of a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can be wildly varying even if the represented function is smooth. We derive modifications of the Kolmogorov–Arnold representation that transfer smoothness properties of the represented function to the outer function and can be well approximated by ReLU networks. It appears that, instead of two hidden layers, a more natural interpretation of the Kolmogorov–Arnold representation is that of a deep neural network where most of the layers are required to approximate the interior function.
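For context (this equation is not part of the abstract itself), the Kolmogorov–Arnold superposition theorem the abstract refers to states that every continuous function f on [0,1]^d can be written in terms of univariate interior functions \psi_{q,p} and outer functions \Phi_q:

\[
  f(x_1,\dots,x_d) \;=\; \sum_{q=0}^{2d} \Phi_q\!\Big(\sum_{p=1}^{d} \psi_{q,p}(x_p)\Big).
\]

The inner sums play the role of a first hidden layer and the outer functions \Phi_q a second, which is the two-hidden-layer analogy discussed above. The interior functions \psi_{q,p} can be chosen independently of f, whereas the outer functions \Phi_q depend on the represented function f; this dependence is the obstacle the abstract identifies.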