Topics
Adversarial system, Computer science, Certification, Differential privacy, Bounded function, Computer security, Artificial intelligence, Malware, Artificial neural network, Robustness (evolution), Theoretical computer science, Machine learning, Data mining, Mathematics, Biochemistry, Gene, Mathematical analysis, Chemistry, Law, Political science
Authors
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana
Source
Venue: IEEE Symposium on Security and Privacy
Date: 2019-05-01
Citations: 486
Identifier
DOI: 10.1109/sp.2019.00044
Abstract
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, a set of certified defenses has been introduced, which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism that provides a rigorous, generic, and flexible foundation for defense.
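The connection claimed in the abstract can be made concrete. PixelDP adds a noise layer so that the network's expected output is (ε, δ)-differentially private with respect to bounded changes of the input pixels; the standard DP bound on expected values then turns the gap between the top two expected scores into a robustness certificate. Restating the paper's condition (treat the exact constants as an assumption of this summary): for a randomized scoring function A with per-label outputs in [0, 1] that is (ε, δ)-DP over a p-norm ball of radius L around the input x, the prediction on x is guaranteed stable against any perturbation inside that ball whenever

    E[A(x)]_k > e^(2ε) · max_{i≠k} E[A(x)]_i + (1 + e^ε) · δ

where k is the predicted label. Below is a minimal Monte Carlo sketch of this check in Python; scores_fn, n_draws, and the function name are illustrative assumptions, not the authors' API:

    import numpy as np

    def certify(scores_fn, x, n_draws=300, eps=1.0, delta=1e-5):
        # scores_fn(x) is assumed to return one noisy score vector in [0, 1]
        # per call, e.g. a network with a Gaussian noise layer inserted after
        # an early convolution, in the spirit of PixelDP. Illustrative only.
        draws = np.stack([scores_fn(x) for _ in range(n_draws)])
        e_scores = draws.mean(axis=0)               # estimate of E[A(x)]
        k = int(e_scores.argmax())                  # predicted label
        runner_up = np.partition(e_scores, -2)[-2]  # max score over i != k
        # Restated PixelDP robustness condition (constants assumed):
        certified = e_scores[k] > np.exp(2 * eps) * runner_up + (1 + np.exp(eps)) * delta
        return k, bool(certified)

In the paper, the expected scores are additionally corrected with confidence intervals over the Monte Carlo draws before the condition is tested; the sketch omits that step for brevity.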