Backdoor
Computer science
Deep learning
Artificial intelligence
Process (computing)
Machine learning
Pruning
Artificial neural network
Vulnerability (computing)
Computer security
Agronomy
Biology
Operating system
Authors
Xueluan Gong, Yanjiao Chen, Yang Wang, Qian Wang, Yuzhe Gu, Huayang Huang, Chao Shen
Identifier
DOI: 10.1109/sp46215.2023.10179375
Abstract
Recent works have revealed the vulnerability of deep neural networks to backdoor attacks, where a backdoored model orchestrates targeted or untargeted misclassification when activated by a trigger. A line of purification methods (e.g., fine-pruning, neural attention transfer, MCR [69]) has been proposed to remove the backdoor in a model. However, they either fail to reduce the attack success rate of more advanced backdoor attacks or largely degrade the prediction capacity of the model for clean samples. In this paper, we put forward a new purification defense framework, dubbed SAGE, which utilizes self-attention distillation to purge models of backdoors. Unlike traditional attention transfer mechanisms that require a teacher model to supervise the distillation process, SAGE can realize self-purification with a small number of clean samples. To enhance the defense performance, we further propose a dynamic learning rate adjustment strategy that carefully tracks the prediction accuracy of clean samples to guide the learning rate adjustment. We compare the defense performance of SAGE with 6 state-of-the-art defense approaches against 8 backdoor attacks on 4 datasets. It is shown that SAGE can reduce the attack success rate by as much as 90% with less than a 3% decrease in prediction accuracy for clean samples. We will open-source our code upon publication.
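To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch of purification via self-attention distillation together with a clean-accuracy-driven learning rate rule. It is an illustration based only on the abstract, not the paper's actual implementation: the assumed `model` interface (returning logits plus two intermediate feature maps), the choice of layer pair, the loss weight `sad_weight`, and the halve-on-accuracy-drop rule are all hypothetical.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, size):
    """Squared channel-mean -> spatial attention, resized and L2-normalized."""
    attn = feat.pow(2).mean(dim=1, keepdim=True)              # (B, 1, H, W)
    attn = F.interpolate(attn, size=size, mode="bilinear",
                         align_corners=False)
    return F.normalize(attn.flatten(1), p=2, dim=1)           # (B, H*W)

@torch.no_grad()
def clean_accuracy(model, loader, device):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits, _ = model(x)  # assumed: model also returns feature maps
        correct += (logits.argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def purify(model, clean_loader, val_loader, device,
           epochs=20, lr=1e-2, sad_weight=1.0, acc_drop_tol=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    ref_acc = clean_accuracy(model, val_loader, device)
    for _ in range(epochs):
        model.train()
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            # Hypothetical interface: logits plus a shallow and a deep
            # intermediate feature map from the same forward pass.
            logits, (shallow, deep) = model(x)
            size = shallow.shape[-2:]
            # Self-distillation: align the shallow block's attention with
            # the deeper block's, using the model as its own teacher.
            loss = F.cross_entropy(logits, y) + sad_weight * F.mse_loss(
                attention_map(shallow, size), attention_map(deep, size))
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Dynamic learning rate: shrink the step when clean accuracy
        # degrades, so purification does not erode benign performance.
        acc = clean_accuracy(model, val_loader, device)
        if acc < ref_acc - acc_drop_tol:
            for g in opt.param_groups:
                g["lr"] *= 0.5
        else:
            ref_acc = max(ref_acc, acc)
    return model
```

Because the distillation target comes from the model's own deeper layers, no separate teacher model is needed, matching the self-purification setting described above; only the small clean set (`clean_loader`) and a held-out clean validation split (`val_loader`) are assumed.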