Keywords: computer science; utterance; feature (linguistics); feature learning; artificial intelligence; speech recognition; pattern recognition; frame; convolutional neural network; emotion classification; feature extraction; fusion mechanism
Authors
Zengzhao Chen, Jiawen Li, Hai Liu, Xuyang Wang, Wang Hu, Qiuyu Zheng
Identifier
DOI:10.1016/j.eswa.2022.118943
Abstract
Speech emotion recognition (SER) has become a crucial topic in the field of human–computer interaction. Feature representation plays an important role in SER, but it still faces many challenges, such as the difficulty of predicting which features are most effective for SER and the cultural differences in emotion expression. Most previous studies use a single type of feature for the recognition task or conduct early fusion of features. However, a single type of feature cannot fully reflect the emotions in speech signals. Moreover, because different features carry different information, direct fusion cannot integrate their complementary advantages. To overcome these challenges, this paper proposes a parallel network for multi-scale SER based on a connection attention mechanism (AMSNet). AMSNet fuses fine-grained frame-level manual features with coarse-grained utterance-level deep features. Meanwhile, it adopts different speech emotion feature extraction modules according to the temporal and spatial characteristics of speech signals, which enriches the features and improves feature characterization. The network consists of a frame-level representation learning module (FRLM) based on the temporal structure and an utterance-level representation learning module (URLM) based on the global structure. Furthermore, an improved attention-based long short-term memory (LSTM) is introduced into FRLM to focus on the frames that contribute most to the final emotion recognition result. In URLM, a convolutional neural network with a squeeze-and-excitation block (SCNN) is introduced to extract deep features. In addition, the connection attention mechanism is proposed for feature fusion, applying different weights to different features. Extensive experiments are conducted on the IEMOCAP and EmoDB datasets, and the results demonstrate the effectiveness and performance superiority of AMSNet. Our code will be publicly available at https://codeocean.com/capsule/8636967/tree/v1.
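The core fusion idea in the abstract, weighting the frame-level (attention-LSTM) stream and the utterance-level (SE-CNN) stream differently before combining them, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and names (`connection_attention_fuse`, the projection vectors `w_frame`/`w_utt`, and the 128-dimensional features are all hypothetical), not the authors' actual implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def connection_attention_fuse(frame_feat, utt_feat, w_frame, w_utt):
    """Fuse frame-level and utterance-level features with attention weights.

    Hypothetical simplification of a connection-attention fusion: each
    feature stream is scored by its own projection vector, the scores are
    normalized with softmax, and each stream is scaled by its weight
    before concatenation.
    """
    scores = np.array([frame_feat @ w_frame, utt_feat @ w_utt])
    alpha = softmax(scores)  # attention weights over the two streams
    return np.concatenate([alpha[0] * frame_feat, alpha[1] * utt_feat])

# Toy example with random features standing in for the two streams.
rng = np.random.default_rng(0)
frame_feat = rng.standard_normal(128)  # e.g. attention-LSTM (FRLM) output
utt_feat = rng.standard_normal(128)    # e.g. SE-CNN (URLM) output
w_frame = rng.standard_normal(128)
w_utt = rng.standard_normal(128)

fused = connection_attention_fuse(frame_feat, utt_feat, w_frame, w_utt)
print(fused.shape)  # (256,)
```

In contrast to direct (equal-weight) concatenation, the softmax weights let the network emphasize whichever stream is more informative for a given utterance, which is the motivation the abstract gives for the connection attention mechanism.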