Keywords
Feedforward, Computer science, Feedforward neural network, Artificial intelligence, Algorithm, Applied mathematics, Artificial neural network, Mathematics, Theoretical computer science, Control theory, Mathematical optimization, Control, Control engineering, Engineering
Authors
Kurt Hornik,Maxwell B. Stinchcombe,Halbert White
Identifier
DOI: 10.1016/0893-6080(90)90005-6
Abstract
We give conditions ensuring that multilayer feedforward networks with as few as a single hidden layer and an appropriately smooth hidden layer activation function are capable of arbitrarily accurate approximation to an arbitrary function and its derivatives. In fact, these networks can approximate functions that are not differentiable in the classical sense, but possess only a generalized derivative, as is the case for certain piecewise differentiable functions. The conditions imposed on the hidden layer activation function are relatively mild; the conditions imposed on the domain of the function to be approximated have practical implications. Our approximation results provide a previously missing theoretical justification for the use of multilayer feedforward networks in applications requiring simultaneous approximation of a function and its derivatives.
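The following is a minimal numerical sketch of the phenomenon the abstract describes: a single-hidden-layer network g(x) = sum_j c_j tanh(w_j x + b_j) fit to f(x) = sin(x), whose analytic derivative g'(x) is then compared against f'(x) = cos(x). The random fixed hidden weights and least-squares fit of the output weights are an illustrative shortcut assumed here for brevity; they are not the paper's construction or proof technique.

```python
import numpy as np

# Sketch (assumed setup, not the paper's method): fix random hidden
# weights/biases, fit only the output weights by least squares, then
# check that both the network and its analytic derivative track the
# target function and its derivative.

rng = np.random.default_rng(0)

n_hidden = 200
W = rng.normal(scale=2.0, size=n_hidden)    # hidden weights w_j
b = rng.uniform(-3.0, 3.0, size=n_hidden)   # hidden biases b_j

x_train = np.linspace(-np.pi, np.pi, 400)
y_train = np.sin(x_train)                   # target f(x) = sin(x)

# Hidden-layer features tanh(w_j * x + b_j), shape (400, n_hidden)
H = np.tanh(np.outer(x_train, W) + b)
c, *_ = np.linalg.lstsq(H, y_train, rcond=None)  # output weights c_j

def net(x):
    """Network output g(x) = sum_j c_j * tanh(w_j x + b_j)."""
    return np.tanh(np.outer(np.atleast_1d(x), W) + b) @ c

def net_deriv(x):
    """Analytic derivative g'(x) = sum_j c_j * w_j * (1 - tanh^2(w_j x + b_j))."""
    t = np.tanh(np.outer(np.atleast_1d(x), W) + b)
    return ((1.0 - t**2) * W) @ c

x_test = np.linspace(-3.0, 3.0, 200)
err_f = np.max(np.abs(net(x_test) - np.sin(x_test)))
err_df = np.max(np.abs(net_deriv(x_test) - np.cos(x_test)))
print(f"max |g - f|   = {err_f:.2e}")
print(f"max |g' - f'| = {err_df:.2e}")
```

Running the script prints the maximum approximation errors for the function and its first derivative on a test grid; under this setup both are small, illustrating (not proving) the simultaneous-approximation property the paper establishes.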