Adversarial system
Computer science
Liveness
Robustness (evolution)
Artificial intelligence
Authentication (law)
Biometrics
Noise reduction
Machine learning
Computer security
Theoretical computer science
Biochemistry
Gene
Chemistry
Authors
Juzhen Wang, Yiqi Hu, Yiren Qi, Ziwen Peng, Changjia Zhou
Identifier
DOI:10.1109/tc.2021.3066614
Abstract
Deep learning techniques are widely adopted as a service in various scenarios. However, they are naturally exposed to adversarial attacks. Such imperceptible-perturbation-based attacks can cause severe damage to today's authentication systems that adopt DNNs as their core, such as fingerprint liveness detection systems and face recognition systems. Rather than improving the model's robustness, this paper realizes a defense against adversarial attacks based on denoising and reconstruction. The proposed method can be viewed as a two-step defense framework: the first step denoises the input adversarial example, and the second reconstructs the sample so that it is close to the original clean image, helping the target model output the original label. The method is evaluated against six kinds of state-of-the-art adversarial attacks, including adaptive attacks, which are known to be the strongest. We also demonstrate the effectiveness of the proposed work on finance authentication systems as a real-life case study. Experimental results reveal that our method is more robust than the previous super-resolution-only defense, attaining higher average accuracy over clean and distorted samples. To the best of our knowledge, this is the first work to present a comprehensive defense framework against adversarial attacks on finance authentication systems.
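The abstract describes the defense as a two-step pipeline: a denoising pass to suppress the adversarial perturbation, followed by a reconstruction pass to restore the detail that denoising removes before the image reaches the target model. A minimal sketch of that structure is below. The paper does not specify the exact operators, so the mean-filter denoiser and unsharp-mask reconstructor here are illustrative stand-ins (the paper's reconstruction step is super-resolution-based), and the function names `denoise`, `reconstruct`, and `defend` are hypothetical:

```python
import numpy as np

def denoise(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Step 1: suppress high-frequency adversarial noise.

    A k-by-k mean filter stands in for whatever denoiser the
    paper actually uses; any smoothing operator fits this slot.
    """
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def reconstruct(x: np.ndarray) -> np.ndarray:
    """Step 2: restore detail lost in step 1.

    An unsharp mask stands in for the paper's super-resolution
    reconstruction; it re-amplifies edges so the target model
    sees something close to the original clean image.
    """
    blurred = denoise(x, k=3)
    return np.clip(x + 0.5 * (x - blurred), 0.0, 1.0)

def defend(x: np.ndarray) -> np.ndarray:
    """Full two-step defense: denoise, then reconstruct."""
    return reconstruct(denoise(x))
```

In a deployed system the defended image, not the raw input, would be passed to the fingerprint-liveness or face-recognition DNN, which is what lets the framework avoid retraining the target model itself.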