Authors
Erkang Jing, Yezheng Liu, Yidong Chai, Jun Sun, Sagar Samtani, Yuanchun Jiang, Qian Yang
Identifier
DOI: 10.1016/j.ipm.2023.103501
Abstract
This paper focuses on active interpretability for deep learning-based speech emotion recognition (SER). To achieve this, we propose an explicit feature-constrained model, the interpretable group convolutional neural network (IG-CNN). In the proposed model, we first introduce an interpretability constraint to learn human-understandable interpretable representations, so that the emotion prediction decision can be actively interpreted via the model coefficients. To acquire further representations beyond the interpretable ones, and to ensure they remain useful for SER, we then design an uncorrelation constraint between the interpretable and autonomous representations and introduce a group CNN structure. We evaluate the model on the IEMOCAP, RAVDESS, eNTERFACE'05, and CREMA-D datasets. Experimental results show that our model outperforms all baselines. In addition, the proposed model learns patterns of human perception of speech emotion and provides explanations for its recognition results.
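The abstract only names the two constraints, so the following is a minimal sketch of how such a model could be wired up in PyTorch. It is not the authors' implementation: the branch layouts, feature dimensions, the exact form of the penalties, and names such as IGCNNSketch, uncorrelation_loss, and interpretability_loss are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class IGCNNSketch(nn.Module):
    """Hypothetical sketch of an interpretable group CNN: one conv group
    yields interpretable features tied to human-understandable targets;
    a second group learns autonomous features that are penalized for
    correlating with the first."""

    def __init__(self, n_interp=16, n_auto=16, n_classes=4):
        super().__init__()
        # Two parallel conv "groups" over a (batch, 1, freq, time) input,
        # e.g. a log-mel spectrogram.
        self.interp_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_interp),
        )
        self.auto_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_auto),
        )
        # Linear classifier: its coefficients on the interpretable
        # features are what makes the decision actively explainable.
        self.classifier = nn.Linear(n_interp + n_auto, n_classes)

    def forward(self, x):
        z_i = self.interp_branch(x)   # interpretable representations
        z_a = self.auto_branch(x)     # autonomous representations
        logits = self.classifier(torch.cat([z_i, z_a], dim=1))
        return logits, z_i, z_a


def uncorrelation_loss(z_i, z_a, eps=1e-8):
    """One plausible reading of the uncorrelation constraint: penalize
    the cross-correlation between the two representation groups."""
    z_i = (z_i - z_i.mean(0)) / (z_i.std(0) + eps)
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + eps)
    cross = (z_i.T @ z_a) / z_i.shape[0]   # cross-correlation matrix
    return cross.pow(2).mean()


def interpretability_loss(z_i, handcrafted):
    """Tie interpretable features to human-understandable targets, e.g.
    per-utterance pitch/energy statistics computed offline (assumed)."""
    return nn.functional.mse_loss(z_i, handcrafted)
```

In training, these terms would typically enter a weighted sum, e.g. cross-entropy plus λ1 times the interpretability loss plus λ2 times the uncorrelation loss, with the weights tuned per dataset; the paper itself should be consulted for the actual formulation.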