Adversarial system
Computer science
Data science
Deep neural network
Scalability
Python (programming language)
Guard (computer science)
Artificial intelligence
Field (mathematics)
Classifier (UML)
Machine learning
Artificial neural network
Computer security
Programming language
Database
Pure mathematics
Mathematics
Authors
Yao Li, Minhao Cheng, Cho-Jui Hsieh, Thomas C. Lee
Identifier
DOI: 10.1080/00031305.2021.2006781
Abstract
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially Deep Neural Networks (DNNs), are vulnerable to adversarial examples; that is, examples that are carefully crafted to fool a well-trained classification model while remaining indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was first identified by Biggio et al. and Szegedy et al., much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against such examples. This article aims to introduce this topic and its latest developments to the statistical community, focusing primarily on the generation of and defense against adversarial examples. Computing code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. It is the hope of the authors that this article will encourage more statisticians to work in this important and exciting field of generating and defending against adversarial examples.
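To make the notion of a "carefully crafted" example concrete, below is a minimal sketch of one widely used attack in this literature, the fast gradient sign method (FGSM), written in Python with PyTorch. The attack is not described in the abstract itself; the function name `fgsm_attack`, the classifier `model`, the batch `(x, y)`, and the perturbation budget `epsilon` are illustrative assumptions, not code from the paper.

```python
# A minimal FGSM-style sketch: perturb the input in the direction that
# increases the classifier's loss, under an L-infinity budget epsilon.
# `model`, `x`, `y`, and `epsilon` are assumed placeholders for illustration.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial examples x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step each pixel by epsilon in the sign of the gradient,
    # then clip back to the valid [0, 1] image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

Because the perturbation is bounded by a small epsilon per pixel, the resulting image typically looks unchanged to a human while the model's prediction flips, which is the vulnerability the surveyed attack and defense methods revolve around.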