Computer science
Acoustic model
Speech recognition
Convolutional neural network
Convolution (computer science)
Artificial intelligence
Hidden Markov model
Pattern recognition (psychology)
Feature (linguistics)
Artificial neural network
Domain (mathematical analysis)
Feature extraction
Set (abstract data type)
Speech processing
Mathematical analysis
Linguistics
Philosophy
Mathematics
Programming language
Authors
Zhenye Gan,Zhenxing Kong,Min Zhang
Identifier
DOI:10.1109/epce58798.2023.00044
Abstract
In this paper, we present an improved acoustic model, CNN-DFSMN. It uses a CNN to learn local frequency-domain and time-domain features, and introduces skip connections between memory blocks in adjacent layers, alleviating the vanishing-gradient problem when building very deep structures. In recent years, acoustic models based on Connectionist Temporal Classification (CTC) have achieved good performance in speech recognition. Typically, LSTM-type networks are used as the acoustic model in CTC systems; however, LSTM computation is costly, and training with the CTC criterion is sometimes difficult. Inspired by the work on DFSMN, we replace the LSTM with a DFSMN in CTC-based acoustic modeling, then combine a convolutional neural network (CNN) with this architecture to train a CNN-DFSMN-CTC acoustic model. We pair the acoustic model with a 3-gram language model and combine the lexicon with the acoustic feature vectors to decode the recognized text. This further improves the performance of Tibetan speech recognition. The final experimental results show that, on the same test set, the WER of the DFSMN-CTC-based method is 2.34% and 0.94% higher than that of the CNN-CTC-based and LSTM-CTC-based methods, and the recognition rate based on CNN-DFSMN-CTC is 3.52% and 2.23% higher than that based on DFSMN and DFSMN-CTC.
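The core mechanism the abstract describes is the DFSMN memory block: each output frame augments the current projected feature with weighted look-back and look-ahead frames, plus a skip connection from the previous layer's memory output. The sketch below illustrates that computation in plain Python; the function name, tap-weight scheme, and scalar weights are illustrative assumptions, not the paper's actual implementation.

```python
def dfsmn_memory_block(p, prev_hat=None, a=(0.5, 0.25), c=(0.5, 0.25), stride=1):
    """Illustrative sketch of one DFSMN-style memory block (hypothetical code).

    p        : list of T feature vectors (each a list of floats), the
               projected hidden features of the current layer
    prev_hat : memory-block output of the previous layer; when given, it is
               added as the skip connection that eases gradient flow in
               very deep stacks
    a, c     : scalar tap weights for look-back / look-ahead frames
    stride   : frame stride between taps
    """
    T = len(p)
    D = len(p[0])
    out = []
    for t in range(T):
        v = list(p[t])                        # start from the current frame
        for i, w in enumerate(a, start=1):    # weighted past frames
            if t - i * stride >= 0:
                for d in range(D):
                    v[d] += w * p[t - i * stride][d]
        for j, w in enumerate(c, start=1):    # weighted future frames
            if t + j * stride < T:
                for d in range(D):
                    v[d] += w * p[t + j * stride][d]
        if prev_hat is not None:              # skip connection between
            for d in range(D):                # adjacent memory blocks
                v[d] += prev_hat[t][d]
        out.append(v)
    return out
```

In the full model sketched by the abstract, CNN layers would first extract local time-frequency features, a stack of such memory blocks (with skip connections) would model long-range context, and the output layer would be trained with the CTC criterion.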