Authors
Chen Wan, Fangjun Huang, Xianfeng Zhao
Identifier
DOI: 10.1109/TMM.2023.3255742
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which can fool a classifier by adding small perturbations to the original example. In most existing attacks, the added perturbations are determined mainly by the gradient of the loss function with respect to the current example. In this paper, a new average gradient-based adversarial attack is proposed. In the proposed method, the gradients from past iterations are used to construct a dynamic set of adversarial examples at each iteration. The average gradient is then computed from the gradients of the loss function with respect to all the examples in this dynamic set together with the current adversarial example, and this average gradient determines the added perturbations. Unlike existing adversarial attacks, the proposed average gradient-based attack optimizes the added perturbations over a dynamic set of adversarial examples whose size grows with the number of iterations. The proposed method is highly extensible and can be integrated into most existing gradient-based attacks. Extensive experiments demonstrate that, compared with state-of-the-art gradient-based adversarial attacks, the proposed attack achieves higher attack success rates and better transferability, which helps evaluate the robustness of networks and the effectiveness of defense methods.
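The iterative procedure described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the step size, epsilon ball, sign-based update, and the toy logistic loss (`grad_fn`, `w`) are all hypothetical choices introduced here; the paper's actual update rule and integration with existing attacks may differ.

```python
import numpy as np

def average_gradient_attack(x0, grad_fn, eps=0.1, alpha=0.02, steps=5):
    """Sketch of an average-gradient iterative attack (details assumed).

    Each iteration appends the current adversarial example to a dynamic
    set, so the set grows with the number of iterations.  The update
    direction is the sign of the gradient averaged over that set.
    """
    x = x0.copy()
    dynamic_set = []                      # grows by one example per iteration
    for _ in range(steps):
        dynamic_set.append(x.copy())      # current example joins the set
        grads = [grad_fn(xi) for xi in dynamic_set]
        avg_grad = np.mean(grads, axis=0)
        x = x + alpha * np.sign(avg_grad)         # ascend the loss
        x = np.clip(x, x0 - eps, x0 + eps)        # stay inside the eps-ball
    return x

# Toy differentiable loss: logistic loss of a fixed linear classifier
# (hypothetical stand-in for a DNN loss gradient).
w = np.array([0.5, -1.0, 0.25])

def grad_fn(x, y=1.0):
    # gradient of log(1 + exp(-y * w.x)) with respect to x
    z = y * w.dot(x)
    return -y * w / (1.0 + np.exp(z))

x0 = np.array([1.0, 0.2, -0.3])
x_adv = average_gradient_attack(x0, grad_fn)
```

Averaging over the whole set, rather than using only the current gradient, smooths the update direction across iterations, which is the intuition behind the improved transferability the abstract reports.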