Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning
Authors
Haibin Wu, Xu Li, Andy T. Liu, Zhiyong Wu, Helen Meng, Hung-yi Lee
Published in
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing (IEEE) · Date: 2022-01-01 · Volume 30, pp. 202-217 · Cited by: 11
Previous works have shown that automatic speaker verification (ASV) is seriously vulnerable to malicious spoofing attacks, such as replay, synthetic speech, and the recently emerged adversarial attacks. Great effort has been dedicated to defending ASV against replay and synthetic speech; however, only a few approaches have been explored to deal with adversarial attacks. All existing approaches to tackling adversarial attacks on ASV require knowledge of how the adversarial samples are generated, but it is impractical for defenders to know the exact attack algorithms applied by in-the-wild attackers. This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms. Inspired by self-supervised learning models (SSLMs), which can alleviate superficial noise in their inputs and reconstruct clean samples from corrupted ones, this work regards adversarial perturbations as one kind of noise and conducts adversarial defense for ASV with SSLMs. Specifically, we propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection. The purification module aims to alleviate the adversarial perturbations in the samples and pull the contaminated adversarial inputs back towards the decision boundary. Experimental results show that the proposed purification module effectively counters adversarial attacks and outperforms traditional filters in both alleviating adversarial noise and maintaining performance on genuine samples. The detection module aims to separate adversarial samples from genuine ones based on the statistical properties of ASV scores derived from a single ASV system cascaded with different numbers of SSLMs. Experimental results show that the detection module helps shield the ASV system by detecting adversarial samples. Both the purification and detection methods help defend against different kinds of attack algorithms. Moreover, since there is no common metric for evaluating ASV performance under adversarial attacks, this work also formalizes evaluation metrics for adversarial defense that take both purification-based and detection-based approaches into account. We sincerely encourage future works to benchmark their approaches within the proposed evaluation framework.
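
As a rough illustration of the two defenses described in the abstract, the sketch below wires a purification pass and a score-variability detection statistic around a toy ASV scorer. The sslm_purify smoother, the mean-pooled cosine scorer, and the noise-based "adversarial" input are placeholders invented for this sketch; the paper uses pre-trained self-supervised reconstruction models, a real speaker-embedding ASV system, and gradient-based attacks.

    import numpy as np

    # Hypothetical sketch: sslm_purify stands in for a pre-trained
    # self-supervised reconstruction model and asv_score for a real
    # speaker-embedding scorer; neither is the authors' released code.

    def sslm_purify(features: np.ndarray) -> np.ndarray:
        """One purification pass. A real SSLM would mask and reconstruct
        the input, washing out small additive perturbations; this
        placeholder just smooths each feature dimension along time."""
        kernel = np.ones(5) / 5.0
        return np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), 1, features)

    def asv_score(enroll: np.ndarray, test: np.ndarray) -> float:
        """Cosine score between utterance-level embeddings; mean pooling
        over frames stands in for a real speaker encoder."""
        e, t = enroll.mean(axis=1), test.mean(axis=1)
        return float(e @ t / (np.linalg.norm(e) * np.linalg.norm(t) + 1e-8))

    def purified_score(enroll: np.ndarray, test: np.ndarray,
                       n_passes: int = 3) -> float:
        """Purification defense: purify the test utterance n_passes times
        before scoring it against the enrollment utterance."""
        for _ in range(n_passes):
            test = sslm_purify(test)
        return asv_score(enroll, test)

    def detection_statistic(enroll: np.ndarray, test: np.ndarray,
                            max_passes: int = 5) -> float:
        """Detection defense: score the same trial with 0..max_passes
        purification passes. Adversarial inputs tend to show larger score
        shifts across passes than genuine ones, so the variance of the
        score sequence can be thresholded to flag an attack."""
        scores, current = [], test
        for _ in range(max_passes + 1):
            scores.append(asv_score(enroll, current))
            current = sslm_purify(current)
        return float(np.var(scores))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        enroll = rng.normal(size=(40, 200))   # (feat_dim, n_frames)
        genuine = enroll + 0.05 * rng.normal(size=enroll.shape)
        # Crude stand-in for an adversarial perturbation (no gradients here).
        adversarial = genuine + 0.5 * np.sign(rng.normal(size=enroll.shape))
        print("purified score:", purified_score(enroll, adversarial))
        print("detection stat (genuine):    ", detection_statistic(enroll, genuine))
        print("detection stat (adversarial):", detection_statistic(enroll, adversarial))

In the paper, the detection statistic is built from the scores of a single ASV system cascaded with different numbers of SSLMs; the variance-over-passes statistic above is one simple instantiation of that idea, with the detection threshold left to be tuned on held-out genuine trials.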