Monaural
Transformer
Computer science
Encoding (memory)
Speech recognition
Artificial intelligence
Electrical engineering
Engineering
Voltage
Authors
Qiquan Zhang, Ge Meng, Hongxu Zhu, Eliathamby Ambikairajah, Qi Song, Zhaoheng Ni, Haizhou Li
Identifier
DOI: 10.1109/icassp48485.2024.10446337
Abstract
Transformer architecture has enabled recent progress in speech enhancement. Since Transformers are position-agnostic, positional encoding is the de facto standard component used to enable Transformers to distinguish the order of elements in a sequence. However, it remains unclear how positional encoding impacts speech enhancement based on Transformer architectures. In this paper, we perform a comprehensive empirical study evaluating five positional encoding methods, i.e., sinusoidal and learned absolute position embeddings (APE), T5-RPE, and KERPLE, as well as the Transformer without positional encoding (No-Pos), across both causal and noncausal configurations. We conduct extensive speech enhancement experiments involving spectral mapping and masking methods. Our findings establish that positional encoding offers little benefit to models in a causal configuration, which suggests that causal attention may implicitly incorporate position information. In a noncausal configuration, the models benefit significantly from positional encoding. In addition, we find that among the four position embeddings, relative position embeddings outperform APEs.
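For context, the two families compared in the abstract work quite differently: absolute position embeddings are added to the input features once before the first Transformer block, while relative position embeddings such as T5-RPE and KERPLE bias the attention logits as a function of the query-key offset. The sketch below is a minimal NumPy illustration of both ideas, not the paper's implementation; the function names are hypothetical, the 10000 base and interleaved sine/cosine layout follow the original Transformer paper, and the relative bias omits T5's logarithmic offset bucketing for brevity.

```python
import numpy as np

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal APE: (seq_len, d_model) table added to the input features.
    Even dimensions carry sine, odd dimensions cosine, at geometrically
    spaced wavelengths. Assumes d_model is even."""
    positions = np.arange(seq_len)[:, None]                # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]               # (1, d_model/2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)  # per-dim frequency
    angles = positions * angle_rates                       # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def relative_position_bias(seq_len: int, bias_table: np.ndarray) -> np.ndarray:
    """Simplified T5-style RPE: one learned scalar per (key - query) offset,
    added directly to the attention logits. bias_table has 2*seq_len - 1
    entries covering offsets from -(seq_len-1) to seq_len-1."""
    offsets = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    return bias_table[offsets + seq_len - 1]               # (seq_len, seq_len)

# Usage sketch: a 100-frame sequence of 256-dim spectral features.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 256))
x = x + sinusoidal_position_encoding(100, 256)             # APE: applied to inputs

logits = rng.standard_normal((100, 100))                   # stand-in attention logits
bias_table = rng.standard_normal(2 * 100 - 1)              # learned in practice
logits = logits + relative_position_bias(100, bias_table)  # RPE: applied to logits
```

Because the relative bias depends only on the offset between positions, it is translation-invariant along the sequence, which is one common explanation for why relative schemes generalize better than absolute ones in the kind of comparison the paper reports.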