Transformer
Computer science
Speech recognition
Encoder
Sequence (biology)
Decoding methods
Word error rate
Sequence labeling
Sequence learning
Artificial intelligence
Natural language processing
Task (project management)
Algorithm
Voltage
Engineering
Electrical engineering
Operating system
Systems engineering
Biology
Genetics
Authors
Linhao Dong, Shuang Xu, Bo Xu
Identifier
DOI:10.1109/icassp.2018.8462506
Abstract
Recurrent sequence-to-sequence models using the encoder-decoder architecture have made great progress in the speech recognition task. However, they suffer from slow training because their internal recurrence limits training parallelization. In this paper, we present the Speech-Transformer, a no-recurrence sequence-to-sequence model that relies entirely on attention mechanisms to learn positional dependencies and can therefore be trained faster and more efficiently. We also propose a 2D-Attention mechanism, which jointly attends to the time and frequency axes of the 2-dimensional speech inputs, providing more expressive representations for the Speech-Transformer. Evaluated on the Wall Street Journal (WSJ) speech recognition dataset, our best model achieves a competitive word error rate (WER) of 10.9%, while the whole training process takes only 1.2 days on 1 GPU, significantly faster than published results for recurrent sequence-to-sequence models.
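The 2D-Attention idea described in the abstract can be illustrated with a minimal sketch: scaled dot-product attention is applied once along the time axis and once along the frequency axis of a spectrogram, and the two results are concatenated. This is a simplification under stated assumptions; the actual model uses convolutional query/key/value projections and multiple heads, and the input shape (50 frames by 80 mel bins) is hypothetical.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Plain scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def attention_2d(spec):
    """Simplified 2D-Attention over a (time, freq) spectrogram.

    Attends along the time axis (frames attend to frames) and along the
    frequency axis (bins attend to bins), then concatenates the results.
    """
    t_out = scaled_dot_attention(spec, spec, spec)          # (time, freq)
    f_out = scaled_dot_attention(spec.T, spec.T, spec.T).T  # (time, freq)
    return np.concatenate([t_out, f_out], axis=-1)          # (time, 2*freq)

# Hypothetical input: 50 frames of an 80-bin log-mel spectrogram.
spec = np.random.randn(50, 80)
out = attention_2d(spec)
print(out.shape)  # (50, 160)
```

Attending over both axes lets the model relate distant frames (time) and distant spectral bands (frequency), which a purely time-axis attention would not capture directly.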