Keywords
Decoupling (probability)
Adversarial system
Feature (linguistics)
Computer science
Artificial intelligence
Pattern recognition (psychology)
Algorithm
Control engineering
Engineering
Philosophy
Linguistics
Authors
Weidong Wang, Zhi Li, Shuaiwei Liu, Li Zhang, Jin Yang, Yi Wang
Identifier
DOI:10.1016/j.imavis.2024.104931
Abstract
Recently, it was found that deep neural networks (DNNs) are susceptible to adversarial input perturbations. Most defense strategies adopt preprocessing-based denoising, which mitigates the impact of adversarial perturbations on DNNs by learning the distribution of nonadversarial datasets and projecting adversarial inputs onto the learned nonadversarial manifolds. However, existing defense strategies commonly focus on reconstructing clean images while ignoring the role of the adversarial perturbations themselves; as a result, the reconstructed images fail to match the visual quality and classification accuracy of the original clean images, and the resulting improvement in adversarial robustness is limited. This paper proposes a feature decoupling-interaction network (FDIN), which introduces the concepts of clean features and adversarial features and separates the two kinds of features from the input adversarial examples (AEs) in a feature decoupling-interaction manner. The clean features are used to reconstruct the input image so that it is as close as possible to the original clean image, and the adversarial features are used to reconstruct the adversarial perturbations. Adversarial perturbations are removed from the adversarial examples across multiple cross cycles to further improve the reconstructed image's visual quality and classification accuracy. The features of the original clean image serve as prior knowledge to guide the network in learning the clean features of the adversarial examples and to improve the model's classification accuracy on clean examples. In addition, a classification loss function based on the Carlini & Wagner (CW) attack algorithm is used instead of the conventional cross-entropy loss to improve the adversarial robustness of the FDIN. The experimental results show that the proposed method achieves better defense performance than current state-of-the-art methods on both standard tests and various attack tests, and it even exceeds the test accuracy of the target classifier on the original test set.
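The abstract mentions replacing cross-entropy with a classification loss based on the Carlini & Wagner (CW) attack objective. The paper's exact formulation is not given here, but the well-known CW margin term is f(x) = max(max_{i≠t} Z(x)_i − Z(x)_t, −κ), where Z(x) are the logits and t is the true class. The sketch below is a minimal, hedged PyTorch rendering of such a margin loss, not the authors' code; the function name `cw_margin_loss` and the margin parameter `kappa` are illustrative assumptions.

```python
import torch

def cw_margin_loss(logits: torch.Tensor, target: torch.Tensor,
                   kappa: float = 0.0) -> torch.Tensor:
    """CW-style margin loss (a sketch, not the paper's exact loss).

    Unlike cross-entropy, which matches the softmax output to a one-hot
    distribution, this loss only asks that the true-class logit exceed
    the largest incorrect-class logit by at least `kappa`.
    """
    # Logit of the correct class for each example, shape (N,).
    true_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
    # Largest logit among the incorrect classes: mask out the true class.
    masked = logits.clone()
    masked.scatter_(1, target.unsqueeze(1), float("-inf"))
    max_other = masked.max(dim=1).values
    # Penalize whenever the margin (true - best other) falls short of kappa.
    return torch.clamp(max_other - true_logit + kappa, min=0.0).mean()

# Usage sketch: logits from any classifier, integer class labels.
logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
loss = cw_margin_loss(logits, target, kappa=0.5)
```

A loss of this shape goes to zero once every example is classified with the desired margin, which is one reason CW-style objectives are often preferred over cross-entropy in robustness-oriented training.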