Keywords
Gradient descent
Computer science
Artificial intelligence
Stochastic gradient descent
Artificial neural network
Recurrent neural network
Deep learning
Machine learning
Pattern recognition
Authors
Yoshua Bengio, P. Simard, Paolo Frasconi
Abstract
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
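The abstract's central claim, that gradient-based learning becomes harder as the span of the dependency grows, is the vanishing-gradient effect: backpropagated gradients are products of per-step Jacobians, and when each factor has magnitude below one the product shrinks exponentially with the gap length. Below is a minimal numerical sketch of that effect, not the paper's own experiments: a one-unit tanh recurrence stands in for a recurrent network, and the function name gradient_through_time, the recurrent weight w = 0.9, and the seed are illustrative assumptions.

```python
# Minimal sketch (not from the paper): for the scalar recurrence
# h_t = tanh(w * h_{t-1} + x_t), accumulate the product of per-step
# Jacobians d h_t / d h_{t-1} to get d h_T / d h_0, and watch it
# shrink as the temporal gap T grows. w = 0.9 is an arbitrary choice.
import numpy as np

def gradient_through_time(T, w=0.9, seed=0):
    """Return |d h_T / d h_0| for h_t = tanh(w * h_{t-1} + x_t)."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(size=T)           # random input sequence
    h, grad = 0.0, 1.0                # state and running Jacobian product
    for t in range(T):
        h = np.tanh(w * h + xs[t])
        grad *= (1.0 - h**2) * w      # tanh'(pre) * w = d h_t / d h_{t-1}
    return abs(grad)

for T in (5, 10, 25, 50, 100):
    print(f"T={T:3d}  |dh_T/dh_0| ~ {gradient_through_time(T):.3e}")
```

Since |tanh'| <= 1, each factor is bounded by |w| = 0.9, so the gradient is at most 0.9^T; this exponential decay is one side of the trade-off the abstract describes between efficient gradient-descent learning and latching information over long periods.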