Authors
Jiaxin Ye, Xin-Cheng Wen, Yujie Wei, Yong Xu, Kunhong Liu, Hongming Shan
Identifier
DOI:10.1109/icassp49357.2023.10096370
Abstract
Speech emotion recognition (SER) plays a vital role in improving interactions between humans and machines by inferring human emotions and affective states from speech signals. While recent works primarily focus on mining spatiotemporal information from hand-crafted features, we explore how to model the temporal patterns of speech emotions across dynamic temporal scales. Towards that goal, we introduce a novel temporal emotional modeling approach for SER, termed Temporal-aware bI-direction Multi-scale Network (TIM-Net), which learns multi-scale contextual affective representations from various time scales. Specifically, TIM-Net first employs temporal-aware blocks to learn temporal affective representations, then integrates complementary information from the past and the future to enrich contextual representations, and finally fuses features from multiple time scales for better adaptation to emotional variation. Extensive experimental results on six benchmark SER datasets demonstrate the superior performance of TIM-Net, with average improvements of 2.34% in UAR and 2.61% in WAR over the second-best results on each corpus. The source code is available at https://github.com/Jiaxin-Ye/TIM-Net_SER.
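The abstract's three-stage pipeline (temporal-aware blocks, bidirectional past/future integration, multi-scale fusion) can be illustrated with a minimal numpy sketch. This is not the authors' implementation (which is released at the repository above); the depthwise causal dilated convolution, the sum-based bidirectional merge, and the mean-based scale fusion here are simplifying assumptions chosen only to make the data flow concrete.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Depthwise causal dilated 1-D convolution, a common TCN-style
    temporal-aware building block (illustrative, not the paper's exact block).
    x: (T, C) feature sequence; w: (K, C) per-channel kernel; returns (T, C)."""
    T, C = x.shape
    K = w.shape[0]
    pad = (K - 1) * dilation                      # left-pad so output stays causal
    xp = np.vstack([np.zeros((pad, C)), x])
    out = np.zeros((T, C))
    for t in range(T):
        for k in range(K):
            out[t] += xp[t + pad - k * dilation] * w[k]
    return np.tanh(out)

def tim_net_sketch(x, kernels):
    """Sketch of multi-scale bidirectional temporal modeling:
    for each dilation scale, run the causal block over the sequence forward
    and over the time-reversed sequence (past + future context), merge the
    two directions, pool over time, and fuse the scales by averaging."""
    scale_features = []
    for d, w in enumerate(kernels):
        fwd = causal_dilated_conv(x, w, 2 ** d)
        bwd = causal_dilated_conv(x[::-1], w, 2 ** d)[::-1]
        scale_features.append((fwd + bwd).mean(axis=0))   # temporal pooling
    return np.mean(scale_features, axis=0)                # multi-scale fusion

# Hypothetical usage on a random 50-frame, 8-channel feature sequence:
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 8))
kernels = [rng.standard_normal((2, 8)) * 0.1 for _ in range(3)]  # 3 scales
utterance_feature = tim_net_sketch(x, kernels)  # fixed-size emotion embedding
```

The fixed-size vector produced per utterance would then feed a classifier head; exponentially growing dilations (`2 ** d`) are the standard TCN device for covering short and long time scales with few layers.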