Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing (Institute of Electrical and Electronics Engineers) · Date: 2022-01-01 · Volume 30, pp. 1374-1385 · Citations: 35
Deep neural networks (DNNs) represent the mainstream methodology for supervised speech enhancement, primarily due to their capability to model complex functions using hierarchical representations. However, a recent study revealed that DNNs trained on a single corpus fail to generalize to untrained corpora, especially in low signal-to-noise ratio (SNR) conditions. Developing a noise-, speaker-, and corpus-independent speech enhancement algorithm is essential for real-world applications. In this study, we propose a self-attending recurrent neural network, or attentive recurrent network (ARN), for time-domain speech enhancement to improve cross-corpus generalization. ARN comprises recurrent neural networks (RNNs) augmented with self-attention blocks and feedforward blocks. We evaluate ARN on different corpora with nonstationary noises in low SNR conditions. Experimental results demonstrate that ARN substantially outperforms competitive approaches to time-domain speech enhancement, such as RNNs and dual-path ARNs. Additionally, we report an important finding: the two popular approaches to speech enhancement, complex spectral mapping and time-domain enhancement, obtain similar results for RNN and ARN with large-scale training. We also provide a challenging subset of the test set used in this study for evaluating future algorithms and facilitating direct comparisons.
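The abstract describes ARN as an RNN augmented with self-attention and feedforward blocks. The following is a minimal NumPy sketch of one such block, assuming an Elman-style recurrence, single-head scaled dot-product self-attention, and residual connections around the attention and feedforward sub-blocks; the actual paper's layer sizes, normalization, and attention details are not given in the abstract, so these choices are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn(x, W_x, W_h):
    """Elman-style recurrence over time: h_t = tanh(x_t W_x + h_{t-1} W_h)."""
    H = W_h.shape[0]
    h = np.zeros(H)
    outs = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_x + h @ W_h)
        outs.append(h)
    return np.stack(outs)          # shape (T, H)

def self_attention(h):
    """Single-head scaled dot-product self-attention over all time frames."""
    d = h.shape[1]
    scores = h @ h.T / np.sqrt(d)  # (T, T) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return w @ h                   # attention-weighted combination of frames

def feedforward(h, W1, W2):
    """Two-layer position-wise ReLU MLP."""
    return np.maximum(h @ W1, 0.0) @ W2

def arn_block(x, W_x, W_h, W1, W2):
    h = simple_rnn(x, W_x, W_h)
    h = h + self_attention(h)          # residual self-attention block (assumed)
    h = h + feedforward(h, W1, W2)     # residual feedforward block (assumed)
    return h

# Toy forward pass: 50 time frames of 16-dim features -> 32-dim hidden states.
T, D, H = 50, 16, 32
x = rng.standard_normal((T, D))
out = arn_block(
    x,
    rng.standard_normal((D, H)) * 0.1,
    rng.standard_normal((H, H)) * 0.1,
    rng.standard_normal((H, 4 * H)) * 0.1,
    rng.standard_normal((4 * H, H)) * 0.1,
)
print(out.shape)  # (50, 32)
```

Because the attention step lets every output frame attend to the whole utterance, the block can capture long-range context that a plain RNN recurrence propagates only sequentially, which is consistent with the generalization motivation in the abstract.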