Speech enhancement
Computer science
Microphone
Speech recognition
Microphone array
Noise reduction
Noise (video)
Voice activity detection
Speech processing
Artificial intelligence
Telecommunications
Sound pressure
Image (mathematics)
Authors
Yang Yang, Shao-Fu Shih, Hakan Erdogan, Jamie Menjay Lin, Chehung Lee, Yunpeng Liu, George Sung, Matthias Grundmann
Identifier
DOI: 10.1109/icassp49357.2023.10096763
Abstract
High-quality speech capture has been widely studied for both voice communication and human-computer interface purposes. To improve capture performance, multi-microphone speech enhancement techniques are often deployed on various devices. The multi-microphone speech enhancement problem is often decomposed into two decoupled steps: a beamformer that provides spatial filtering and a single-channel speech enhancement model that cleans up the beamformer output. In this work, we propose a speech enhancement solution that takes both the raw microphone and beamformer outputs as the input to an ML model. We devise a simple yet effective training scheme that allows the model to learn from the cues of the beamformer by contrasting the two inputs, greatly boosting its capability in spatial rejection while also performing the general tasks of denoising and dereverberation. The proposed solution takes advantage of classical spatial filtering algorithms instead of competing with them. By design, the beamformer module can then be selected separately and does not require a large amount of data to be optimized for a given form factor, and the network model can be treated as a standalone module that is highly transferable independently of the microphone array. We name the ML module in our solution GSENet, short for Guided Speech Enhancement Network. We demonstrate its effectiveness in suppressing noise and interfering speech on real-world data collected on multi-microphone devices.
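As a rough illustration of the dual-input idea described in the abstract, the sketch below shows a toy PyTorch model that consumes the raw microphone signal and the beamformer output as two parallel input channels and predicts a per-sample gain applied to the beamformer output. The class name, layer sizes, and mask-based output head are illustrative assumptions, not the GSENet architecture from the paper.

```python
# Minimal sketch (assumed design, not the authors' GSENet): an enhancement
# model conditioned on both the raw microphone signal and the beamformer output.
import torch
import torch.nn as nn


class DualInputEnhancer(nn.Module):
    """Toy enhancement model guided by a beamformer output."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Two input channels: channel 0 = raw microphone, channel 1 = beamformer output.
        self.encoder = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        # Predict a per-sample gain (mask) applied to the beamformer output,
        # so the network refines the spatial filter rather than replacing it.
        self.mask_head = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, raw_mic: torch.Tensor, beamformed: torch.Tensor) -> torch.Tensor:
        # raw_mic, beamformed: (batch, samples)
        x = torch.stack([raw_mic, beamformed], dim=1)           # (batch, 2, samples)
        mask = torch.sigmoid(self.mask_head(self.encoder(x)))   # (batch, 1, samples)
        return beamformed * mask.squeeze(1)                      # (batch, samples)


if __name__ == "__main__":
    model = DualInputEnhancer()
    raw = torch.randn(4, 16000)   # e.g. 1 s of audio at 16 kHz per example
    bf = torch.randn(4, 16000)    # corresponding beamformer output
    enhanced = model(raw, bf)
    print(enhanced.shape)         # torch.Size([4, 16000])
```

One design point this sketch tries to capture from the abstract: by giving the network both signals, it can contrast them to infer spatial cues supplied by the beamformer, while the beamformer itself remains a separately chosen classical module.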