Adversarial system
Computer science
Robustness (evolution)
Artificial intelligence
Generalization
Deep learning
Physical system
Coding (set theory)
Benchmark (surveying)
Machine learning
Source code
Mathematics
Mathematical analysis
Biochemistry
Chemistry
Physics
Set (abstract data type)
Geodesy
Quantum mechanics
Gene
Programming language
Geography
Operating system
Authors
Weiwei Feng, Nanqing Xu, Tianzhu Zhang, Baoyuan Wu, Yongdong Zhang
Identifier
DOI:10.1109/tifs.2023.3288426
Abstract
Deep neural networks are known to be vulnerable to adversarial examples: adding carefully crafted adversarial perturbations to the inputs can mislead a DNN model. However, generating effective adversarial examples in the physical world is challenging due to many uncontrollable physical dynamics, which pose security and safety threats in the real world. Current physical attack methods aim to generate robust physical adversarial examples by simulating all possible physical dynamics. When attacking a new image or a new DNN model, they require expensive manual effort to simulate the physical dynamics or considerable time for iterative optimization. To tackle these limitations, we propose a robust and generalized physical adversarial attack method with Meta-GAN (Meta-GAN Attack), which not only generates robust physical adversarial examples, but also generalizes to attacking novel images and novel DNN models given only a few digital and physical images. First, we propose to craft robust physical adversarial examples with a generative attack model by simulating color and shape distortions. Second, we formulate the physical attack as a few-shot learning problem and design a novel class-agnostic and model-agnostic meta-learning algorithm to solve it. Extensive experiments on two benchmark datasets under four challenging experimental settings verify the superior robustness and generalization of our method in comparison with state-of-the-art physical attack methods. The source code is released on GitHub.
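The abstract's first step, crafting perturbations that survive color and shape distortions, is commonly realized by optimizing the attack loss in expectation over sampled distortions (expectation-over-transformation style). The sketch below is a minimal, hypothetical illustration of that idea only: a toy linear "classifier", flat lists as images, and invented helper names (`color_jitter`, `circular_shift`, `eot_attack`) stand in for the paper's actual Meta-GAN generative model and distortion simulators.

```python
import random

def color_jitter(img, a, b):
    # Simulated color distortion: per-pixel affine change (contrast a, brightness b).
    return [a * p + b for p in img]

def circular_shift(img, k):
    # Crude stand-in for a shape/printing distortion: circularly shift pixels by k.
    return img[-k:] + img[:-k] if k else img[:]

def score(w, img):
    # Toy linear "classifier" logit for the true class.
    return sum(wi * pi for wi, pi in zip(w, img))

def eot_attack(img, w, steps=50, eps=0.02, n_samples=8, rng=None):
    # Minimize the true-class score averaged over randomly sampled distortions,
    # so the perturbation remains adversarial under color/shape changes.
    rng = rng or random.Random(0)
    n = len(img)
    delta = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for _ in range(n_samples):
            a = rng.uniform(0.7, 1.3)    # contrast factor
            k = rng.randrange(n)         # shift amount (brightness drops out of the gradient)
            # For T(x) = shift(a*(x+delta)+b, k), d score/d delta is a * inverse-shift(w).
            w_back = w[k:] + w[:k]
            grad = [g + a * wb for g, wb in zip(grad, w_back)]
        # Signed gradient step on the distortion-averaged score.
        delta = [d - eps * (1 if g > 0 else -1 if g < 0 else 0)
                 for d, g in zip(delta, grad)]
    return delta
```

In this toy setting the resulting perturbation lowers the true-class score not only on the clean image but also after jitter and shift, which is the robustness property the paper's physical attack targets; the actual method replaces this per-image optimization with a generator trained via meta-learning for few-shot transfer.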