Keywords
Recurrent neural network
Computer science
Connectionism
Speech recognition
TIMIT
Artificial intelligence
Deep learning
Context
Benchmark (computing)
Time delay neural network
Artificial neural network
Hidden Markov model
Pattern recognition
Authors
Alex Graves, Abdel-rahman Mohamed, Geoffrey E. Hinton
Identifiers
DOI: 10.1109/icassp.2013.6638947
Abstract
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
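The core architectural idea in the abstract is depth in space as well as time: each recurrent layer processes the full hidden sequence emitted by the layer below, so every timestep passes through multiple levels of representation. The following is a minimal pure-Python sketch of that stacking, under stated simplifications: plain tanh cells rather than LSTM, randomly initialised weights, no training, and no CTC output layer. All function names and dimensions here are illustrative, not from the paper.

```python
import math
import random

random.seed(0)

def init_layer(in_dim, hid_dim):
    """Randomly initialise one recurrent layer's weights (illustrative only)."""
    rand = lambda: random.uniform(-0.1, 0.1)
    return {
        "W_x": [[rand() for _ in range(in_dim)] for _ in range(hid_dim)],
        "W_h": [[rand() for _ in range(hid_dim)] for _ in range(hid_dim)],
        "b":   [rand() for _ in range(hid_dim)],
    }

def rnn_layer(layer, xs):
    """Run one simple tanh RNN layer over a sequence xs (a list of vectors)."""
    hid_dim = len(layer["b"])
    h = [0.0] * hid_dim  # initial hidden state
    out = []
    for x in xs:
        # h_t = tanh(W_x x_t + W_h h_{t-1} + b), computed element-wise
        h = [
            math.tanh(
                sum(w * xi for w, xi in zip(layer["W_x"][i], x))
                + sum(w * hj for w, hj in zip(layer["W_h"][i], h))
                + layer["b"][i]
            )
            for i in range(hid_dim)
        ]
        out.append(h)
    return out

def deep_rnn(xs, in_dim, hid_dim, depth):
    """Stack `depth` recurrent layers: layer n's hidden sequence feeds layer n+1."""
    layers = [init_layer(in_dim if n == 0 else hid_dim, hid_dim)
              for n in range(depth)]
    seq = xs
    for layer in layers:
        seq = rnn_layer(layer, seq)
    return seq

# A toy "acoustic" input: 5 frames of 3-dimensional features.
frames = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]
hidden = deep_rnn(frames, in_dim=3, hid_dim=4, depth=3)
print(len(hidden), len(hidden[0]))  # one 4-dim hidden vector per input frame
```

In the paper's full model this stack would use (bidirectional) LSTM cells, and the top layer's hidden sequence would feed a softmax over phoneme labels trained with CTC, which handles the unknown input-output alignment.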