Discriminator
Metric (unit)
Computer science
Generator (circuit theory)
Margin (machine learning)
Feature (linguistics)
Speech recognition
Artificial intelligence
Pattern recognition (psychology)
Machine learning
Engineering
Operations management
Physics
Telecommunications
Power (physics)
Linguistics
Philosophy
Quantum mechanics
Detector
Authors
Heng Guo, Haifang Jian, Yequan Wang, Hongchang Wang, Xiao-Fan Zhao, Wenwu Zhu, Cheng Qian
Identifier
DOI:10.1016/j.apacoust.2023.109385
Abstract
In the speech enhancement (SE) task, the mismatch between the objective function used to train the SE model and the evaluation metric leads to low quality in the generated speech. Although existing studies have attempted to use a metric discriminator to learn a surrogate of the evaluation metric from data and guide generator updates, the metric discriminator's simple structure cannot closely approximate the evaluation metric, which limits SE performance. This paper proposes a multiscale attention metric generative adversarial network (MAMGAN) to resolve this problem. In the metric discriminator, an attention mechanism is introduced to emphasize meaningful features along the spatial and channel directions, avoiding the feature loss caused by direct average pooling, so that the discriminator better approximates the evaluation metric and further improves SE performance. In addition, motivated by the effectiveness of self-attention in capturing long-term dependencies, we construct a multiscale attention module (MSAM). It fully considers multiple representations of the signal and can therefore better model the features of long sequences. An ablation experiment verifies the effectiveness of the attention metric discriminator and the MSAM. Quantitative analysis on the Voice Bank + DEMAND dataset shows that MAMGAN outperforms various time-domain SE methods, achieving a perceptual evaluation of speech quality (PESQ) score of 3.30.
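As a rough illustration of the idea described in the abstract, the sketch below shows a metric discriminator head that re-weights features along the channel and spatial (time-frequency) directions before pooling, rather than applying direct average pooling. This is a minimal, hypothetical PyTorch example: the class names, layer sizes, input shapes, and the specific channel/spatial attention design are assumptions made here for illustration and are not taken from the paper or its released code.

```python
# Hypothetical sketch (not the authors' implementation): a metric discriminator
# that applies channel and spatial attention before pooling, then predicts a
# normalized evaluation-metric-like score in [0, 1].
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Re-weights features along the channel and spatial axes so that
    informative regions dominate the pooled representation."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: a single-channel map over the time-frequency plane.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel weights from globally pooled statistics.
        chan = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * chan
        # Spatial weights from mean/max maps taken across channels.
        spat_in = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        spat = torch.sigmoid(self.spatial_conv(spat_in))
        return x * spat


class AttentionMetricDiscriminator(nn.Module):
    """Maps (enhanced, clean) magnitude spectrogram pairs to a score in [0, 1],
    intended to track a normalized evaluation metric such as PESQ."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
        )
        self.attention = ChannelSpatialAttention(channels)
        self.head = nn.Sequential(nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, enhanced: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
        x = torch.stack([enhanced, clean], dim=1)   # (B, 2, F, T)
        x = self.attention(self.features(x))        # attention-weighted features
        pooled = x.mean(dim=(2, 3))                 # pool only after re-weighting
        return self.head(pooled)                    # predicted metric score


if __name__ == "__main__":
    disc = AttentionMetricDiscriminator()
    enh = torch.rand(4, 257, 100)   # dummy magnitude spectrograms (B, F, T)
    cln = torch.rand(4, 257, 100)
    print(disc(enh, cln).shape)     # torch.Size([4, 1])
```

The key design point this sketch tries to convey is that pooling happens only after the feature map has been re-weighted, so regions that matter for the metric prediction are not averaged away; in training, such a discriminator's output would be compared against the normalized metric computed on the same (enhanced, clean) pair.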