Keywords: Backdoor, Computer science, Feature (linguistics), Artificial intelligence, Deep neural network, Code (set theory), Deep learning, Interpretability, Artificial neural network, Metric (unit), Popularity, Machine learning, Pattern recognition (psychology), Computer security, Engineering, Psychology, Social psychology, Philosophy, Linguistics, Operations management, Set (abstract data type), Programming language
Authors
Zhongliang Yue, Jun Xia, Zhiwei Ling, Ming Hu, Ting Wang, Xian Wei, Mingsong Chen
Identifier
DOI:10.1145/3581783.3612415
Abstract
Due to the popularity of Artificial Intelligence (AI) techniques, we are witnessing an increasing number of backdoor injection attacks designed to maliciously compromise Deep Neural Networks (DNNs) and cause misclassification. Although various defense methods exist that can erase backdoors from DNNs, they suffer from either a high residual Attack Success Rate (ASR) or a non-negligible loss in Benign Accuracy (BA). Inspired by the observation that a backdoored DNN tends to form a new cluster in its feature space for poisoned data, in this paper we propose a novel two-stage backdoor defense method, named MCLDef, based on Model-Contrastive Learning (MCL). MCLDef purifies the backdoored model by pulling the feature representations of poisoned data toward those of their clean-data counterparts. As the cluster of poisoned data shrinks, the backdoor formed by end-to-end supervised learning can be effectively eliminated. Comprehensive experimental results show that, with only 5% of clean data, MCLDef significantly outperforms state-of-the-art defense methods, reducing ASR by up to 95.79%, while in most cases the BA degradation is kept within 2%. Our code is available at https://github.com/Zhihao151/MCL.
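The core idea — pulling poisoned-input features toward their clean counterparts while pushing them away from the backdoored model's representation — can be sketched as an InfoNCE-style contrastive loss. The following is a minimal NumPy illustration, not the authors' implementation; the function name, the use of a single negative per anchor, and the temperature value are assumptions for exposition only.

```python
import numpy as np

def model_contrastive_loss(z_anchor, z_positive, z_negative, temperature=0.5):
    """Illustrative InfoNCE-style loss (hypothetical, not MCLDef's exact form).

    z_anchor:   features of poisoned inputs from the model being purified
    z_positive: features of the corresponding clean inputs (pulled toward)
    z_negative: features from the frozen backdoored model (pushed away from)
    Each argument is an (N, D) array of feature vectors.
    """
    def cosine(a, b):
        # Row-wise cosine similarity between two (N, D) feature matrices.
        return np.sum(a * b, axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
        )

    pos = np.exp(cosine(z_anchor, z_positive) / temperature)
    neg = np.exp(cosine(z_anchor, z_negative) / temperature)
    # Loss is small when the anchor aligns with the positive, large otherwise.
    return float(np.mean(-np.log(pos / (pos + neg))))
```

Minimizing such a loss shrinks the distance between poisoned and clean feature clusters, which is the mechanism the abstract credits for eliminating the backdoor.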