Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples

Authors
Chaitanya Jannu, Sunny Dayal Vanambathina
Source
Journal: International Journal of Image and Graphics [World Scientific]
Volume/Issue: 24 (06); Cited by: 1
Identifier
DOI: 10.1142/s0219467824500530
Abstract

Recent research has identified adversarial examples as a challenge to DNN-based automatic speech recognition (ASR) systems. In this paper, we propose a new model based on a convolutional GRU and a self-attention U-Net, called [Formula: see text], to enhance adversarial speech signals. To capture the correlation between neighboring noisy speech frames, a two-layer GRU is added in the bottleneck of the U-Net, and an attention gate is inserted in the up-sampling units to increase adversarial robustness. The GRU combines weight sharing with gating to control the flow of information across multiple feature maps, and as a result it outperforms the original 1D convolution used in [Formula: see text]. In particular, the performance of the model is evaluated with explainable speech recognition metrics and analyzed under improved adversarial training. We conducted experiments on ASR using adversarial audio attacks and observed that (i) the robustness of DNN-based ASR models can be improved using the temporal features captured by the attention-based GRU network, and (ii) adversarial training, including additive adversarial data augmentation, improves the generalization power of DNN-based ASR models. The word error rate (WER) metric confirms that the enhancement capability is better than the state-of-the-art [Formula: see text]. This improvement stems from the ability of the GRU units to extract global information within the feature maps. Based on the conducted experiments, the proposed [Formula: see text] increases the Speech Transmission Index (STI), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) scores on adversarial speech examples in speech enhancement.
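
As a rough illustration of the architecture described in the abstract, the following is a minimal PyTorch sketch of a 1D U-Net enhancer with a two-layer GRU bottleneck and additive attention gates applied to the encoder skip connections in the up-sampling path. This is not the authors' implementation: the layer widths, kernel sizes, input shape (batch, 1, frames), and the names ConvGRUAttentionUNet and AttentionGate are illustrative assumptions.

```python
# Minimal sketch (assumed layer sizes and names, not the paper's released code):
# a 1D U-Net speech enhancer with a two-layer GRU bottleneck and attention gates
# applied to the encoder skip connections in the up-sampling path.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate: re-weights a skip connection using the decoder feature."""

    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv1d(gate_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv1d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv1d(inter_ch, 1, kernel_size=1)

    def forward(self, gate, skip):
        attn = torch.sigmoid(self.psi(torch.relu(self.w_g(gate) + self.w_x(skip))))
        return skip * attn  # suppress encoder features the gate deems irrelevant


class ConvGRUAttentionUNet(nn.Module):
    """Strided Conv1d encoder -> two-layer GRU bottleneck -> gated transposed-conv decoder."""

    def __init__(self, channels=(1, 16, 32, 64)):
        super().__init__()
        n = len(channels) - 1
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv1d(channels[i], channels[i + 1], 5, stride=2, padding=2),
                          nn.BatchNorm1d(channels[i + 1]), nn.ReLU())
            for i in range(n)])
        # The GRU models correlations between neighboring frames at the bottleneck.
        self.gru = nn.GRU(channels[-1], channels[-1], num_layers=2, batch_first=True)
        self.gates = nn.ModuleList([
            AttentionGate(channels[i + 1], channels[i + 1], channels[i + 1] // 2)
            for i in reversed(range(n))])
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.ConvTranspose1d(2 * channels[i + 1], channels[i], 5,
                                             stride=2, padding=2, output_padding=1),
                          nn.BatchNorm1d(channels[i]),
                          nn.ReLU() if i > 0 else nn.Tanh())
            for i in reversed(range(n))])

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        # (batch, ch, frames) -> (batch, frames, ch) for the GRU, then back.
        x, _ = self.gru(x.transpose(1, 2))
        x = x.transpose(1, 2)
        for gate, dec, skip in zip(self.gates, self.decoders, reversed(skips)):
            x = dec(torch.cat([x, gate(x, skip)], dim=1))
        return x


if __name__ == "__main__":
    net = ConvGRUAttentionUNet()
    adversarial = torch.randn(2, 1, 1024)   # two adversarial speech segments
    enhanced = net(adversarial)
    print(enhanced.shape)                   # torch.Size([2, 1, 1024])
```

In this sketch the GRU runs along the time axis of the bottleneck feature map, which is one way to realize the weight sharing and gated information flow across feature maps that the abstract attributes to the GRU, while the attention gates re-weight encoder features before they are concatenated in the up-sampling units.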