Authors
Yafeng Chen, Siqi Zheng, Hui Wang, Luyao Cheng, Qian Chen, Jiajun Qi
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Cited by: 1
Identifier
DOI:10.48550/arxiv.2305.12838
Abstract
Effective fusion of multi-scale features is crucial for improving speaker verification performance. Most existing methods aggregate multi-scale features in a layer-wise manner via simple operations such as summation or concatenation. This paper proposes a novel architecture called Enhanced Res2Net (ERes2Net), which incorporates both local and global feature fusion techniques to improve performance. Local feature fusion (LFF) fuses the features within a single residual block to extract the local signal. Global feature fusion (GFF) takes acoustic features of different scales as input to aggregate the global signal. To facilitate effective feature fusion in both LFF and GFF, an attentional feature fusion module is employed in the ERes2Net architecture, replacing summation or concatenation operations. Experiments conducted on the VoxCeleb datasets demonstrate the superiority of ERes2Net for speaker verification. Code has been made publicly available at https://github.com/alibaba-damo-academy/3D-Speaker.
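The core idea the abstract describes, replacing plain summation or concatenation of two feature branches with an attention-weighted blend, can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual AFF module: the sigmoid gate over the channel-averaged context is a hypothetical stand-in for the small learned bottleneck network a real attentional feature fusion block would use, and all function names here are assumptions.

```python
import numpy as np

def attentional_feature_fusion(x, y):
    """Blend two equally shaped feature maps with a context-dependent gate.

    Instead of the fixed combination x + y, compute a per-position
    attention weight w in (0, 1) from the integrated features and return
        w * x + (1 - w) * y
    so the network can favor one branch over the other adaptively.
    """
    z = x + y                             # initial integration of the two branches
    ctx = z.mean(axis=-1, keepdims=True)  # global (channel-averaged) context
    w = 1.0 / (1.0 + np.exp(-ctx))        # sigmoid gate in (0, 1)
    return w * x + (1.0 - w) * y

# Toy usage: with x all-ones and y all-zeros, the gate is
# sigmoid(mean(x + y)) = sigmoid(1) ~ 0.731, so the output leans toward x.
x = np.ones((2, 4))
y = np.zeros((2, 4))
out = attentional_feature_fusion(x, y)
```

In the paper's design this kind of gate is applied both inside a residual block (LFF) and across features of different scales (GFF); the sketch only shows the two-input fusion step itself.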