Keywords
encoder; computer science; Transformer; speech recognition; inference; artificial intelligence; hidden Markov model; benchmark; artificial neural network; fusion; end-to-end; deep learning; decoding methods; pattern recognition; algorithm; engineering
Authors
Timo Lohrenz,Zhengyang Li,Tim Fingscheidt
Identifier
DOI: 10.21437/Interspeech.2021-555
Abstract
Stream fusion, also known as system combination, is a common technique in automatic speech recognition for traditional hybrid hidden Markov model approaches, yet it remains mostly unexplored for modern deep neural network end-to-end architectures. Here, we investigate various fusion techniques for the all-attention-based encoder-decoder architecture known as the Transformer, striving for optimal fusion by exploring different fusion levels in an example single-microphone setting with fusion of standard magnitude and phase features. We introduce a novel multi-encoder learning method that performs a weighted combination of two encoder-decoder multi-head attention outputs only during training. Employing only the magnitude feature encoder in inference, we show consistent improvements on the Wall Street Journal (WSJ) task with a language model and on LibriSpeech, without any increase in runtime or parameter count. Combining two such multi-encoder-trained models by simple late fusion in inference, we achieve state-of-the-art performance for Transformer-based models on WSJ, with a significant relative word error rate (WER) reduction of 19% compared to the current benchmark approach.
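To make the training-time combination described above concrete, the following is a minimal NumPy sketch, not the authors' implementation: the single attention head, the fixed scalar `weight`, the function names, and the `late_fusion` helper are illustrative assumptions. The abstract states only that two encoder-decoder multi-head attention outputs are combined by a weighted sum during training and that the magnitude-feature encoder alone is used at inference.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product encoder-decoder (cross) attention."""
    d_k = queries.shape[-1]
    scores = queries @ keys.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ values

def multi_encoder_attention(dec_q, enc_mag, enc_phase, weight=0.5, training=True):
    """Weighted combination of two encoder-decoder attention outputs.

    During training, attention over the magnitude-feature encoder output and
    the phase-feature encoder output is mixed with a scalar weight; at
    inference only the magnitude branch is evaluated, so the deployed
    decoder path keeps its original runtime and parameter count.
    """
    attn_mag = cross_attention(dec_q, enc_mag, enc_mag)
    if not training:
        return attn_mag  # magnitude-feature encoder only in inference
    attn_phase = cross_attention(dec_q, enc_phase, enc_phase)
    return weight * attn_mag + (1.0 - weight) * attn_phase

def late_fusion(logprobs_a, logprobs_b, lam=0.5):
    """Simple late fusion (hypothetical): weighted sum of two models' output log-probabilities."""
    return lam * logprobs_a + (1.0 - lam) * logprobs_b

# Toy shapes: 5 decoder positions, 8 encoder frames, model width 16.
rng = np.random.default_rng(0)
dec_q = rng.normal(size=(5, 16))
enc_mag = rng.normal(size=(8, 16))
enc_phase = rng.normal(size=(8, 16))
out_train = multi_encoder_attention(dec_q, enc_mag, enc_phase, training=True)
out_infer = multi_encoder_attention(dec_q, enc_mag, enc_phase, training=False)
assert out_train.shape == out_infer.shape == (5, 16)
```

Because the phase branch only shapes the shared decoder during training, dropping it at inference leaves the deployed model identical in size and speed to a single-encoder baseline, consistent with the abstract's claim of no runtime or parameter increase.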