
Residual Fusion Probabilistic Knowledge Distillation for Speech Enhancement

Keywords
Probabilistic logic, computer science, residual, weighting, artificial intelligence, machine learning, pairwise comparison, distillation, convolutional neural network, deep learning, frame (networking), algorithm, medicine, telecommunications, organic chemistry, radiology, chemistry
Authors
Jiaming Cheng,Ruiyu Liang,Lin Zhou,Li Zhao,Chengwei Huang,Björn W. Schuller
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing [Institute of Electrical and Electronics Engineers]
Volume 32, pp. 2680-2691
Identifier
DOI: 10.1109/TASLP.2024.3395978
Abstract

In recent years, a great deal of research has focused on developing neural network (NN)-based speech enhancement (SE) models, which have achieved promising results. However, NN-based models typically require expensive computation to achieve remarkable performance, constraining their deployment in real-world scenarios, especially when hardware resources are limited or latency requirements are strict. To reduce this computational burden, we propose a unified residual fusion probabilistic knowledge distillation (KD) method for the SE task, in which knowledge is transferred from a deep teacher to a shallower student model. Previous KD approaches commonly focused on narrowing the output distances between teachers and students, but research on the intermediate representations of these models is lacking. In this paper, we first study a cross-layer residual feature fusion strategy, which enables the student model to distill knowledge contained in multiple teacher layers, from shallow to deep. Second, a frame-weighting probabilistic distillation loss is proposed to place more emphasis on frames containing essential information and to preserve pairwise probabilistic similarities in the representation space. The proposed distillation framework is applied to the dual-path dilated convolutional recurrent network (DPDCRN), which won first place in the SE track of the L3DAS23 challenge. Extensive experiments are conducted on single-channel and multichannel SE datasets. Objective evaluations show that the proposed KD strategy outperforms other distillation methods and considerably improves the enhancement performance of the low-complexity student model (with only 17% of the teacher's parameters).
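
The abstract names the two components but gives no equations, so the sketches below are illustrative reconstructions under stated assumptions, not the paper's implementation. First, a minimal PyTorch sketch of a frame-weighting probabilistic distillation loss: frame weights are assumed to come from teacher-feature energy (one plausible proxy for "frames containing essential information"), and pairwise frame-to-frame similarities are softmax-normalized into per-frame distributions that the student matches with a KL divergence. The function name, the energy-based weighting, and the temperature are all hypothetical.

```python
# A minimal sketch (not the paper's code) of a frame-weighted probabilistic
# distillation loss. Assumptions: frame weights come from teacher-feature
# energy, and pairwise frame similarities are softmax-normalized into
# per-frame distributions matched via KL divergence.
import torch
import torch.nn.functional as F

def frame_weighted_prob_kd_loss(student_feat, teacher_feat, tau=1.0):
    """student_feat, teacher_feat: (batch, frames, dim) intermediate features."""
    # Emphasize frames with high teacher energy (an assumed importance proxy).
    energy = teacher_feat.pow(2).mean(dim=-1)                    # (B, T)
    weights = energy / (energy.sum(dim=-1, keepdim=True) + 1e-8)

    # Pairwise frame-to-frame cosine similarities within each model.
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1)
    sim_s = torch.bmm(s, s.transpose(1, 2)) / tau                # (B, T, T)
    sim_t = torch.bmm(t, t.transpose(1, 2)) / tau

    # Treat each frame's similarity row as a probability distribution and
    # match student to teacher with KL divergence, weighted per frame.
    log_p_s = F.log_softmax(sim_s, dim=-1)
    p_t = F.softmax(sim_t, dim=-1)
    kl = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=-1)    # (B, T)
    return (weights * kl).sum(dim=-1).mean()

# Example with random features; the two models' widths may differ.
loss = frame_weighted_prob_kd_loss(torch.randn(2, 100, 64), torch.randn(2, 100, 256))
```

Because this term compares each model's own frame-by-frame similarity matrix, the student and teacher features may have different widths, so no projection layer is needed for this loss.

For the cross-layer residual feature fusion, a minimal sketch under the assumption that projected teacher layers are residually accumulated from shallow to deep into a single target onto which a student layer is regressed; the linear projections, the L1 regression, and the ResidualFusion class name are illustrative choices, not the paper's architecture.

```python
# A minimal sketch of cross-layer residual fusion of teacher features
# (an assumed reading of the abstract, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFusion(nn.Module):
    """Fuse several teacher layers (shallow -> deep) into one distillation target."""
    def __init__(self, teacher_dims, fused_dim):
        super().__init__()
        # Project each teacher layer to a common width before residual summing.
        self.projs = nn.ModuleList(nn.Linear(d, fused_dim) for d in teacher_dims)

    def forward(self, teacher_feats):
        # teacher_feats: list of (B, T, dim_i) tensors, ordered shallow to deep.
        fused = self.projs[0](teacher_feats[0])
        for proj, feat in zip(self.projs[1:], teacher_feats[1:]):
            fused = fused + proj(feat)  # residual accumulation across layers
        return fused

# Example: fuse three teacher layers and regress a student layer onto the result.
fusion = ResidualFusion(teacher_dims=[128, 256, 256], fused_dim=64)
teacher_feats = [torch.randn(2, 100, d) for d in (128, 256, 256)]
student_feat = torch.randn(2, 100, 64)
loss = F.l1_loss(student_feat, fusion(teacher_feats))
```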