Computer science
MNIST database
Adversarial system
Artificial intelligence
Machine learning
Task (project management)
Deep neural network
Detector
Construct (Python library)
Artificial neural network
One-shot
Deep learning
Data mining
Computer network
Engineering
Economics
Management
Mechanical engineering
Telecommunications
Authors
Chen Ma, Chenxu Zhao, Hailin Shi, Li Chen, Jun-Hai Yong, Dan Zeng
Identifier
DOI: 10.1145/3343031.3350887
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which maliciously add human-imperceptible perturbations to images and thus lead to incorrect predictions. Existing studies have proposed various methods to detect adversarial attacks. However, new attack methods keep evolving and yield new adversarial examples that bypass existing detectors. Training a detector requires collecting tens of thousands of samples, while new attacks evolve far more frequently than such high-cost data collection allows, so only small numbers of samples of newly evolved attacks are available. To solve this few-shot problem with evolving attacks, we propose a meta-learning based robust detection method that detects new adversarial attacks with limited examples. Specifically, the learning consists of a double-network framework: a task-dedicated network and a master network that alternately learn the detection capability for either seen attacks or a new attack. To validate the effectiveness of our approach, we construct benchmarks with few-shot protocols based on three conventional datasets, i.e., CIFAR-10, MNIST and Fashion-MNIST. Comprehensive experiments on these benchmarks verify the superiority of our approach over traditional adversarial attack detection methods. The implementation code is available online.
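The abstract only outlines the double-network idea (a task-dedicated network adapted per attack, and a master network updated from it). The sketch below illustrates one common way such an alternation can be realized, using a Reptile-style clone-adapt-merge loop in PyTorch; the `DetectorNet` architecture, the function names, and all hyper-parameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only: a meta-learning loop for few-shot adversarial
# detection with a master network and a task-dedicated (cloned) network.
# Architecture, update rule and hyper-parameters are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetectorNet(nn.Module):
    """Binary detector: clean image (label 0) vs. adversarial image (label 1)."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def adapt_to_task(master, support_x, support_y, inner_steps=5, inner_lr=1e-2):
    """Clone the master into a task-dedicated network and fine-tune it on the
    few-shot support set of one (seen or newly evolved) attack."""
    task_net = copy.deepcopy(master)
    opt = torch.optim.SGD(task_net.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss = F.cross_entropy(task_net(support_x), support_y)
        loss.backward()
        opt.step()
    return task_net


def meta_update(master, task_net, meta_lr=0.1):
    """Move the master toward the adapted task network (Reptile-style outer step)."""
    with torch.no_grad():
        for p_m, p_t in zip(master.parameters(), task_net.parameters()):
            p_m.add_(meta_lr * (p_t - p_m))


if __name__ == "__main__":
    master = DetectorNet()
    # One meta-iteration on a dummy few-shot task: 5 clean + 5 adversarial
    # CIFAR-10-sized images produced by a single attack method.
    support_x = torch.randn(10, 3, 32, 32)
    support_y = torch.tensor([0] * 5 + [1] * 5)
    task_net = adapt_to_task(master, support_x, support_y)
    meta_update(master, task_net)
```

At test time, the same adapt step would be run on the few available samples of a new attack before the task-dedicated network is used as the detector; how the original work weights seen versus new attacks in the alternation is not specified in the abstract.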