Keywords
Computer science
Causality (physics)
Artificial intelligence
Discriminative
Speech recognition
Component (thermodynamics)
Domain (mathematics)
Representation (politics)
Feature learning
Generalization
Dependency (UML)
Deep learning
Natural language processing
Authors
Jia-Xin Ye,Xin-Cheng Wen,Xuan-Ze Wang,Yong Xu,Yan Luo,Chang-Li Wu,Li-Yan Chen,Kun-Hong Liu
Identifier
DOI: 10.1016/j.specom.2022.07.005
Abstract
• This paper proposes a novel network architecture, GM-TCNet, for Speech Emotion Recognition (SER), based on dilated causal convolutions and a gating mechanism.
• A novel emotional causality representation learning component is designed to capture the dynamics of emotion across the time domain and to better model speech emotions at the frame level. It also builds reliable long-term sentimental dependencies. To the best of our knowledge, this is the first attempt to apply causality learning to SER.
• GM-TCNet uses skip connections among all Gated Convolution Blocks, giving the network a multi-scale temporal receptive field that improves its generalization ability. Moreover, a new dilation-rate distribution across blocks is designed to obtain a larger receptive field, better fitting SER applications.
• The proposed GM-TCNet achieves state-of-the-art results on four widely studied datasets compared with other advanced approaches.

In human-computer interaction, Speech Emotion Recognition (SER) plays an essential role in understanding the user's intent and improving the interactive experience. Since similar sentimental speech samples carry diverse speaker characteristics yet share common antecedents and consequences, an essential challenge for SER is how to produce robust and discriminative representations from the causality between speech emotions. In this paper, we propose a Gated Multi-scale Temporal Convolutional Network (GM-TCNet) that constructs a novel emotional causality representation learning component with a multi-scale receptive field. GM-TCNet deploys this component, built from dilated causal convolution layers and a gating mechanism, to capture the dynamics of emotion across the time domain. Besides, it utilizes skip connections to fuse high-level features from different Gated Convolution Blocks (GCB), capturing abundant and subtle emotion changes in human speech. GM-TCNet first takes a single feature type, Mel-Frequency Cepstral Coefficients (MFCC), as input, then passes it through the Gated Temporal Convolutional Module (GTCM) to generate high-level features. Finally, these features are fed to the emotion classifier to accomplish the SER task. The experimental results show that our model maintains the highest performance in most cases, with +0.90% to +18.50% and +0.55% to +20.15% average relative improvement in weighted average recall and unweighted average recall, respectively, compared to state-of-the-art techniques. The source code is available at: https://github.com/Jiaxin-Ye/GM-TCNet for SER.
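The abstract states that MFCC is the single input feature type. As a minimal sketch of how such frame-level features could be extracted, one might use librosa; the file path, sampling rate, and coefficient count below are illustrative assumptions, not values specified by the paper:

```python
import librosa

# Hypothetical input file and settings; the abstract only states that
# MFCCs are the single input feature type, not these exact parameters.
y, sr = librosa.load("utterance.wav", sr=16000)     # mono waveform
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=39)  # shape: (39, num_frames)
```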
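The core component described above combines dilated causal convolutions with a gating mechanism, and skip connections fuse the outputs of all Gated Convolution Blocks into a multi-scale representation. The PyTorch sketch below illustrates that general idea only; the class names, channel counts, gated (tanh x sigmoid) activation, and doubling dilation schedule are assumptions for illustration, not the paper's official implementation (see the linked repository for that):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCausalConvBlock(nn.Module):
    """Dilated causal convolution with a gated activation (sketch).

    Left-only padding keeps the convolution causal: each output frame
    depends only on the current and earlier input frames.
    """
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                    # x: (batch, channels, time)
        x = F.pad(x, (self.left_pad, 0))     # pad the past side only
        return torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))

class GatedTemporalConvModule(nn.Module):
    """Stack of gated blocks with growing dilation; summing the skip
    outputs of every block yields a multi-scale temporal receptive field."""
    def __init__(self, channels, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            GatedCausalConvBlock(channels, dilation=2 ** i) for i in range(num_blocks)
        )

    def forward(self, x):
        skips = []
        for block in self.blocks:
            x = block(x)
            skips.append(x)                  # skip connection from each block
        return torch.stack(skips).sum(dim=0)  # fuse multi-scale features

# Illustrative usage: a batch of 8 utterances, 39 MFCC coefficients, 300 frames.
feats = GatedTemporalConvModule(channels=39)(torch.randn(8, 39, 300))
print(feats.shape)                           # torch.Size([8, 39, 300])
```

In a full SER pipeline, the fused features would then be pooled and passed to an emotion classifier, as the abstract describes.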