Keywords
Principal component analysis, Linear subspace, Maxima and minima, Artificial neural network, Projection (relational algebra), Subspace topology, Computer science, Mathematics, Algorithm, Saddle point, Artificial intelligence, Function (biology), Matrix (chemical analysis), Pattern recognition (psychology), Mathematical optimization, Pure mathematics, Mathematical analysis, Biology, Composite material, Evolutionary biology, Materials science, Geometry
Authors
Pierre Baldi, Kurt Hornik
Identifier
DOI:10.1016/0893-6080(89)90014-2
Abstract
We consider the problem of learning from examples in layered linear feed-forward neural networks using optimization methods, such as back propagation, with respect to the usual quadratic error function E of the connection weights. Our main result is a complete description of the landscape attached to E in terms of principal component analysis. We show that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns. All the additional critical points of E are saddle points (corresponding to projections onto subspaces generated by higher order vectors). The auto-associative case is examined in detail. Extensions and implications for the learning algorithms are discussed.
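The result can be checked numerically: a minimal sketch (not the paper's own code) that trains a two-layer linear auto-associative network by gradient descent on the quadratic error E and compares the learned map W2 @ W1 with the projection onto the subspace spanned by the first k principal vectors of the training covariance matrix. All dimensions, the learning rate, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training patterns: n-dimensional data with dominant k-dim structure
n, k, m = 8, 2, 500          # assumed sizes for illustration
basis = rng.normal(size=(n, k))
X = basis @ rng.normal(size=(k, m)) + 0.05 * rng.normal(size=(n, m))
X -= X.mean(axis=1, keepdims=True)

# PCA: first k principal vectors of the covariance matrix of the patterns
cov = X @ X.T / m
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
U = eigvecs[:, ::-1][:, :k]              # top-k principal vectors
P_pca = U @ U.T                          # projection onto principal subspace

# Auto-associative linear network x -> W2 @ W1 @ x with hidden size k,
# trained by plain gradient descent on the quadratic error E
W1 = rng.normal(scale=0.1, size=(k, n))
W2 = rng.normal(scale=0.1, size=(n, k))
lr = 0.01                                # assumed step size
for _ in range(5000):
    Y = W2 @ W1 @ X                      # network output
    G = (Y - X) / m                      # gradient of E w.r.t. the output
    gW2 = G @ (W1 @ X).T
    gW1 = W2.T @ G @ X.T
    W2 -= lr * gW2
    W1 -= lr * gW1

# At the unique minimum, W2 @ W1 should approximate the PCA projection
P_net = W2 @ W1
print("||P_net - P_pca|| =", np.linalg.norm(P_net - P_pca))
```

If the paper's landscape result holds, the printed norm should be close to zero even though W1 and W2 individually are only determined up to an invertible transformation of the hidden layer.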