Robustness (evolution)
Computer science
Artificial intelligence
Deep learning
Artificial neural network
Adversarial system
Machine learning
Contextual image classification
Deep neural network
Pattern recognition (psychology)
Simplicity (philosophy)
Recurrent neural network
MNIST database
Algorithm
Backpropagation
Image (mathematics)
Epistemology
Gene
Philosophy
Biochemistry
Chemistry
Authors
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard
Source
Conference: Computer Vision and Pattern Recognition (CVPR)
Date: 2016-06-01
Citations: 3170
Identifiers
DOI: 10.1109/cvpr.2016.282
Abstract
State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
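The abstract only names the DeepFool algorithm without detailing it. As a rough illustration, the sketch below shows the binary-classifier case as described in the paper: the decision boundary is linearized at the current point, the input is projected onto that hyperplane, and the step is repeated until the predicted label flips. The function names (`deepfool_binary`, `f`, `grad_f`) and the toy linear model are illustrative assumptions, not the authors' code.

```python
import numpy as np

def deepfool_binary(x, f, grad_f, max_iter=50, overshoot=0.02):
    """Sketch of DeepFool for a binary classifier: repeatedly project x onto
    the locally linearized decision boundary until the predicted sign flips."""
    x = np.asarray(x, dtype=float)
    x_adv = x.copy()
    orig_sign = np.sign(f(x))
    r_total = np.zeros_like(x)
    for _ in range(max_iter):
        if np.sign(f(x_adv)) != orig_sign:        # label flipped: perturbation found
            break
        g = np.asarray(grad_f(x_adv), dtype=float)
        # closed-form minimal step onto the linearized boundary: -f(x) * g / ||g||^2
        r_total += -f(x_adv) * g / (np.dot(g, g) + 1e-12)
        x_adv = x + (1 + overshoot) * r_total     # small overshoot to actually cross it
    return x_adv, x_adv - x

# Toy usage with a hypothetical linear classifier f(x) = w.x + b
w, b = np.array([1.0, -2.0]), 0.5
f = lambda z: float(np.dot(w, z) + b)
grad_f = lambda z: w
x_adv, r = deepfool_binary(np.array([3.0, 1.0]), f, grad_f)
print("perturbation L2 norm:", np.linalg.norm(r))
```

For an affine classifier this terminates after a single projection, which coincides with the exact minimal perturbation; for deep networks the paper iterates the same kind of projection on a multiclass, locally linearized model, which is what lets it quantify robustness efficiently on large-scale datasets.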