Keywords
computer science; speech recognition; reverberation; robots; human-robot interaction; microphone; voice activity detection; features; beamforming; speech enhancement; filtering (signal processing); noise; speech processing; artificial intelligence; computer vision; acoustics; telecommunications; linguistics; sound pressure
Authors
Nicolás Grágeda, Carlos Busso, Eduardo Alvarado, Ricardo García, Rodrigo Mahú, Fernando Huenupán, Néstor Becerra Yoma
Identifier
DOI: 10.1016/j.csl.2024.101666
Abstract
The use of speech-based interfaces is an appealing alternative for communication in human-robot interaction (HRI). An important challenge in this area is processing distant speech, which is often noisy and affected by reverberation and time-varying acoustic channels. It is important to investigate effective speech solutions, especially in dynamic environments where the robot and the user move, changing the distance and orientation between the speaker and the microphone. This paper addresses this problem in the context of speech emotion recognition (SER), an important task for understanding the intent of a message and the underlying mental state of the user. We propose a novel setup with a PR2 robot that moves while target speech and ambient noise are simultaneously recorded. Our study not only analyzes the detrimental effect of distant speech on SER in this dynamic robot-user setting, but also provides solutions to attenuate it. We evaluate two beamforming schemes that spatially filter the speech signal: delay-and-sum (D&S) and minimum variance distortionless response (MVDR). We consider both the original training speech, recorded under controlled conditions, and simulated conditions in which the training utterances are processed to match the target acoustic environment. We consider the cases where the robot is moving (dynamic case) and not moving (static case). For SER, we explore two state-of-the-art classifiers: one using hand-crafted features with the ladder network strategy, and one using learned features with the wav2vec 2.0 representation. MVDR led to a higher signal-to-noise ratio than the basic D&S method. However, both approaches provided very similar average concordance correlation coefficient (CCC) improvements, equal to 116% on the HRI subsets, using the ladder network trained with the original MSP-Podcast utterances. For the wav2vec 2.0-based model, only D&S led to improvements. Surprisingly, the static and dynamic HRI testing subsets resulted in similar average CCC. Finally, simulating the acoustic environment in the training dataset provided the highest average CCC scores on the HRI subsets, which are only 29% and 22% lower than those obtained with the original training/testing utterances for the ladder network and wav2vec 2.0, respectively.
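As a rough illustration of the two spatial filters compared in the abstract, the sketch below implements STFT-domain D&S and MVDR beamformers in NumPy. It is a minimal sketch, not the paper's implementation: the microphone geometry, steering-vector estimation, and noise-covariance estimation used in the study are not given here, so `steer` and `Rn` are assumed to be available inputs.

```python
import numpy as np

def delay_and_sum(X, steer):
    """Delay-and-sum (D&S) beamformer in the STFT domain.

    X:     (mics, freqs, frames) complex STFT of the array channels.
    steer: (mics, freqs) steering vectors toward the target speaker
           (assumed known; in practice derived from geometry/DOA).
    Returns the single-channel beamformed STFT, shape (freqs, frames).
    """
    M = X.shape[0]
    # Phase-align every channel to the look direction, then average.
    return np.einsum('mf,mft->ft', steer.conj(), X) / M

def mvdr(X, steer, Rn):
    """MVDR beamformer: minimize output noise power subject to a
    distortionless response in the look direction.

    Rn: (freqs, mics, mics) noise spatial covariance matrices, assumed
        estimated from noise-only frames (e.g., gated by a VAD).
    """
    Y = np.empty(X.shape[1:], dtype=complex)
    for f in range(X.shape[1]):
        rinv_d = np.linalg.solve(Rn[f], steer[:, f])      # R_n^{-1} d
        w = rinv_d / (steer[:, f].conj() @ rinv_d).real   # d^H R_n^{-1} d in the denominator
        Y[f] = w.conj() @ X[:, f, :]
    return Y
```

Unlike plain channel averaging, MVDR spatially whitens the noise before steering to the target, which is consistent with the higher output SNR reported for it above.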
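For the learned-feature branch, utterance-level wav2vec 2.0 embeddings can be extracted with the Hugging Face transformers API. This is a sketch under stated assumptions: the checkpoint (`facebook/wav2vec2-base`) and the mean pooling are our illustrative choices; the paper's exact wav2vec 2.0 variant, fine-tuning, and pooling are not specified in the abstract.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Assumed checkpoint; the paper's exact wav2vec 2.0 variant is not given here.
CKPT = "facebook/wav2vec2-base"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
model = Wav2Vec2Model.from_pretrained(CKPT).eval()

def utterance_embedding(waveform, sr=16000):
    """Mean-pooled wav2vec 2.0 features for one utterance.

    waveform: 1-D float array at 16 kHz (e.g., the beamformed signal).
    Returns a (hidden_size,) embedding for a downstream
    emotional-attribute regressor.
    """
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, frames, hidden)
    return hidden.mean(dim=1).squeeze(0)            # temporal mean pooling
```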
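The evaluation metric, CCC, rewards predictions that match the labels in correlation, mean, and variance at once. A minimal reference implementation (the function name is ours):

```python
import numpy as np

def ccc(pred, gold):
    """Concordance correlation coefficient: Pearson correlation
    penalized by mean and variance mismatch between the two series."""
    pred = np.asarray(pred, dtype=float)
    gold = np.asarray(gold, dtype=float)
    mp, mg = pred.mean(), gold.mean()
    cov = ((pred - mp) * (gold - mg)).mean()
    return 2.0 * cov / (pred.var() + gold.var() + (mp - mg) ** 2)
```

Read this way, a 116% CCC improvement means the beamformed score is roughly 2.16 times the baseline, assuming the improvement is reported relative to the unprocessed distant-speech condition.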