Feedforward
Computer science
Feedforward neural networks
Artificial intelligence
Artificial neural networks
Control engineering
Engineering
Identifiers
DOI: 10.1016/0893-6080(91)90009-t
Abstract
We show that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L^p(μ) performance criteria, for arbitrary finite input environment measures μ, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a function and its derivatives.
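For context, the "standard multilayer feedforward networks with a single hidden layer" referred to in the abstract compute functions of the familiar form (a standard formulation of this network class, not quoted from the paper):

$$
f(x) \;=\; \sum_{j=1}^{N} \beta_j \, \sigma\!\left(w_j^{\top} x + b_j\right),
$$

where $\sigma$ is the activation function, $w_j$ and $b_j$ are the hidden-unit weights and biases, and $\beta_j$ are the output weights. In this notation, the main result states that for any bounded, nonconstant $\sigma$, any finite measure $\mu$, any target $g \in L^p(\mu)$, and any $\varepsilon > 0$, there exist $N$ and parameters such that

$$
\left( \int \left| f(x) - g(x) \right|^p \, d\mu(x) \right)^{1/p} < \varepsilon .
$$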