Extreme Learning Machine
Authors
Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew
Source
Journal: International Joint Conference on Neural Networks
Date: 2005-04-05
Citations: 3570
Identifier
DOI: 10.1109/ijcnn.2004.1380068
Abstract
The learning speed of feedforward neural networks has in general been far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons behind this may be: 1) slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. Experimental results on real-world benchmark function-approximation and classification problems, including large complex applications, show that the new algorithm can produce the best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.
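The two steps the abstract describes, randomly choosing the input weights and analytically determining the output weights, can be sketched as follows. This is a minimal illustration, not the authors' reference implementation; it assumes a sigmoid hidden activation and uses the Moore-Penrose pseudoinverse to solve for the output weights, and the function names (`elm_train`, `elm_predict`) and the toy regression task are hypothetical.

```python
import numpy as np

def elm_train(X, T, n_hidden=30, seed=0):
    """Train a single-hidden-layer feedforward network ELM-style."""
    rng = np.random.default_rng(seed)
    # Step 1: randomly choose input weights and biases; these are never tuned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix H (sigmoid activation assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 2: analytically determine output weights via the pseudoinverse of H.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy function-approximation example: fit y = sin(x) on [0, pi].
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=30)
pred = elm_predict(X, W, b, beta)
```

Because training reduces to one random draw and one linear least-squares solve, there is no iterative tuning at all, which is the source of the speed advantage the abstract claims.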