Topics
Adversarial system
Computer science
Affine transformation
Robustness (evolution)
Generalization
Artificial intelligence
Deep neural network
Perspective (graphics)
Theoretical computer science
Artificial neural network
Machine learning
Mathematics
Biochemistry
Gene
Mathematical analysis
Chemistry
Pure mathematics
Authors
Jincheng Li, Shuhai Zhang, Jiezhang Cao, Mingkui Tan
Identifier
DOI: 10.1016/j.neunet.2023.03.008
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples with small perturbations. Adversarial defense has thus become an important means of improving the robustness of DNNs by defending against adversarial examples. Existing defense methods focus on specific types of adversarial examples and may fail to defend well in real-world applications, where models may face many types of attacks and the exact type of adversarial example may even be unknown. In this paper, motivated by the observations that adversarial examples are more likely to appear near the classification boundary and are vulnerable to certain transformations, we study adversarial examples from a new perspective: whether we can defend against them by pulling them back to the original clean distribution. We empirically verify the existence of defense affine transformations that restore adversarial examples. Relying on this, we learn defense transformations to counterattack adversarial examples by parameterizing the affine transformations and exploiting the boundary information of DNNs. Extensive experiments on both toy and real-world datasets demonstrate the effectiveness and generalization of our defense method. The code is available at https://github.com/SCUTjinchengli/DefenseTransformer.
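To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of learning a parameterized affine transformation that pulls adversarial inputs back toward the clean distribution. For simplicity it learns a single global 2D affine matrix and trains it so a frozen classifier recovers the correct labels on transformed adversarial examples; the paper's actual method additionally exploits the boundary information of the DNN, which this sketch omits. The `DefenseAffine` module, `train_defense` function, and `adv_loader` are illustrative names assumed here; see the linked repository for the real code.

```python
# Hypothetical sketch of a learnable affine defense transformation (PyTorch).
# Assumption: a single global affine matrix; the paper learns richer,
# boundary-aware defense transformations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DefenseAffine(nn.Module):
    """Learnable 2D affine transformation applied to input images."""
    def __init__(self):
        super().__init__()
        # 2x3 affine matrix, initialized to the identity transformation.
        self.theta = nn.Parameter(torch.tensor([[1., 0., 0.],
                                                [0., 1., 0.]]))

    def forward(self, x):
        # Broadcast the single affine matrix over the batch, then resample.
        theta = self.theta.unsqueeze(0).expand(x.size(0), -1, -1)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

def train_defense(defense, classifier, adv_loader, epochs=10):
    """Fit the affine defense so the frozen classifier recovers the
    correct labels on transformed adversarial examples."""
    classifier.eval()  # the classifier itself is not updated
    opt = torch.optim.Adam(defense.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x_adv, y in adv_loader:  # adversarial inputs with clean labels
            logits = classifier(defense(x_adv))
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return defense
```

At test time, the learned `defense` module would simply be prepended to the classifier, i.e. predictions are made on `classifier(defense(x))`, so the defense acts as an input-restoration step rather than a modification of the classifier itself.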