Keywords: Adversarial examples, Computer science, Speech recognition, Control (management), Access control, Artificial intelligence, Natural language processing, Computer networks
Authors
H.F. Chen, Jie Zhang, Kejiang Chen, Weiming Zhang, Nenghai Yu
Source
Journal: IEEE Transactions on Artificial Intelligence [Institute of Electrical and Electronics Engineers]
Date: 2023-06-14
Volume/Issue: 5 (3): 1302-1315
Identifiers
DOI: 10.1109/tai.2023.3285858
Abstract
Deep neural networks (DNNs) have achieved remarkable success across various domains, and their commercial value has led to their classification as intellectual property (IP) of their creators. While model watermarking is commonly employed for DNN IP protection, it is limited to post hoc forensics. In contrast, model access control offers a more effective proactive approach, preventing IP infringement through authentication. However, existing model access control methods focus primarily on image classification models and are not suitable for automatic speech recognition (ASR) models, which are also widely used in commercial applications. To address this limitation, and inspired by audio adversarial examples, we propose the first model access control scheme for the IP protection of ASR models: it uses audio adversarial examples whose target labels encode user identity information as identity-proof samples. However, a unique challenge arises in the form of interception attacks, in which an attacker detects and hijacks an authorized sample to bypass the authentication process. To remedy this, we introduce hidden adversarial examples (HAEs) for authentication, which embed the authorization information by slightly modifying the logits while behaving like clean audio, making them difficult to detect by analyzing the predicted results. To further evade steganalysis, which can also be employed for adversarial example detection, we design a distortion cost function inspired by adaptive steganography to guide the generation of HAEs. Extensive experiments on the open-source ASR system DeepSpeech demonstrate that our proposed scheme protects ASR models proactively and is resistant to interception attacks.
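The abstract builds on targeted audio adversarial examples, where a small perturbation steers a model toward a chosen target label. As a rough, hypothetical illustration of that underlying idea only (not the paper's HAE algorithm, which additionally constrains the logits and weights distortion with a steganography-inspired cost function), the following NumPy sketch runs targeted projected sign-gradient descent against a toy linear softmax "classifier" standing in for an ASR model; the model, dimensions, and step sizes are all invented for illustration.

```python
import numpy as np

# Illustrative toy setup (NOT the paper's model): a 4-class linear softmax
# classifier over a 16-dimensional vector standing in for an audio clip.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))   # toy model weights
x = rng.normal(size=16)        # clean "audio" sample
target = 2                     # target label encoding the identity information

def logits(v):
    return W @ v

def target_loss(v, t):
    # Cross-entropy toward the target label (lower = closer to target).
    z = logits(v)
    z = z - z.max()            # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[t])

def grad(v, t):
    # Analytic gradient of the cross-entropy w.r.t. the input
    # for the linear softmax toy model: W^T (softmax(Wv) - onehot(t)).
    z = logits(v)
    z = z - z.max()
    p = np.exp(z) / np.exp(z).sum()
    return W.T @ (p - np.eye(4)[t])

eps = 0.5                      # L-infinity perturbation budget
x_adv = x.copy()
for _ in range(50):
    # Targeted sign-gradient step, then projection back onto the eps-ball,
    # so the perturbation stays small (the "imperceptibility" constraint).
    x_adv = x_adv - 0.05 * np.sign(grad(x_adv, target))
    x_adv = x + np.clip(x_adv - x, -eps, eps)

# x_adv now scores the target label much better than the clean sample did;
# the paper's HAEs additionally keep the *predicted transcript* clean-looking
# and weight each sample's distortion with an adaptive-steganography cost
# (both omitted here).
```

The projection step is the standard PGD construction; the paper's contribution lies in what the target encodes (identity information) and in hiding the perturbation from logit analysis and steganalysis, neither of which this sketch attempts.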