Computer science
Feature learning
Modality (human–computer interaction)
Feature (linguistics)
Artificial intelligence
Encoder
Speech recognition
Pattern
Convolutional neural network
Deep learning
Recurrent neural network
Emotion recognition
Pattern recognition (psychology)
Artificial neural network
Social science
Philosophy
Linguistics
Sociology
Operating system
Authors
Samuel Kakuba, Dong Seog Han
Identifier
DOI:10.1109/icufn57995.2023.10199790
Abstract
The interactive speech between two or more interlocutors involves the text and acoustic modalities. These modalities exhibit intra- and cross-modality relationships at different time intervals which, if modeled well, can provide emotionally rich cues for robust and accurate prediction of emotional states. This necessitates models that take into consideration long- and short-term dependencies between the current, previous, and future time steps using multimodal approaches. Moreover, it is important to contextualize the interactive speech in order to accurately infer the emotional state. A combination of recurrent and/or convolutional neural networks with attention mechanisms is often used by researchers. In this paper, we propose a deep learning-based bimodal speech emotion recognition (DLBER) model that uses multi-level fusion to learn intra- and cross-modality feature representations. The proposed DLBER model uses the transformer encoder to model the intra-modality features that are combined at the first-level fusion in the local feature learning block (LFLB). We also use self-attentive bidirectional LSTM layers to further extract intra-modality features before the second-level fusion for further progressive learning of the cross-modality features. The resultant feature representation is fed into another self-attentive bidirectional LSTM layer in the global feature learning block (GFLB). The interactive emotional dyadic motion capture (IEMOCAP) dataset was used to evaluate the performance of the proposed DLBER model. The proposed DLBER model achieves an F1 score of 72.93% and an accuracy of 74.05%.
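The abstract describes the DLBER pipeline only at a high level. The following PyTorch sketch is an illustrative approximation of that pipeline, not the authors' implementation: the class and layer names, feature dimensions, number of layers, and the exact fusion wiring (concatenation at both fusion levels, with both modalities assumed pre-aligned to the same number of time steps) are assumptions made for this example.

```python
import torch
import torch.nn as nn


class SelfAttentiveBiLSTM(nn.Module):
    """Bidirectional LSTM followed by additive self-attention pooling over time."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def forward(self, x):
        h, _ = self.bilstm(x)                    # (B, T, 2*hidden_dim)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time steps
        return h, (w * h).sum(dim=1)             # full sequence and pooled summary


class DLBERSketch(nn.Module):
    """Approximate bimodal pipeline: per-modality transformer encoders and
    self-attentive BiLSTMs (LFLB), two concatenation-based fusion levels, and a
    global self-attentive BiLSTM (GFLB) feeding an emotion classifier."""

    def __init__(self, text_dim=768, audio_dim=128, d_model=256, n_classes=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)

        def make_encoder():
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=2)

        self.text_enc = make_encoder()
        self.audio_enc = make_encoder()
        self.text_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        self.audio_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        # GFLB input: first-level fusion (2*d_model) + two BiLSTM streams (d_model each).
        self.global_lstm = SelfAttentiveBiLSTM(4 * d_model, d_model)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, text_feats, audio_feats):
        # Assumes both modalities are pre-aligned to the same number of time steps.
        t = self.text_enc(self.text_proj(text_feats))       # intra-modality text features
        a = self.audio_enc(self.audio_proj(audio_feats))    # intra-modality audio features
        fused1 = torch.cat([t, a], dim=-1)                  # first-level fusion (LFLB)
        t_seq, _ = self.text_lstm(t)                        # further intra-modality learning
        a_seq, _ = self.audio_lstm(a)
        fused2 = torch.cat([fused1, t_seq, a_seq], dim=-1)  # second-level fusion
        _, pooled = self.global_lstm(fused2)                # GFLB cross-modality context
        return self.classifier(pooled)


if __name__ == "__main__":
    model = DLBERSketch()
    text = torch.randn(2, 50, 768)    # e.g. token-level text embeddings
    audio = torch.randn(2, 50, 128)   # e.g. frame-level acoustic features
    print(model(text, audio).shape)   # torch.Size([2, 4])
```

In this sketch the fusion levels are plain concatenations and the final emotion prediction comes from the attention-pooled GFLB output; the paper itself should be consulted for the actual layer sizes, fusion operators, and training setup.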