Journal: IEEE Transactions on Dependable and Secure Computing [Institute of Electrical and Electronics Engineers]. Date: 2024-01-01. Pages: 1-15. Citations: 2
Identifiers
DOI: 10.1109/tdsc.2024.3354049
Abstract
As a privacy-preserving distributed learning paradigm, federated learning (FL) has been proven vulnerable to various attacks, among which the backdoor attack is one of the toughest. In this attack, malicious users attempt to embed backdoor triggers into local models, causing crafted inputs to be misclassified as the targeted labels. To counter such attacks, several defense mechanisms have been proposed, but they may lose effectiveness due to the following drawbacks. First, current methods rely heavily on massive amounts of labeled clean data, which is an impractical assumption in FL. Moreover, an unavoidable performance degradation usually occurs during the defensive procedure. To alleviate these concerns, we propose BadCleaner, a lossless and efficient backdoor defense scheme based on attention-guided federated multi-teacher distillation. First, BadCleaner can effectively tune the backdoored joint model without performance degradation by distilling in-depth knowledge from multiple teachers using only a small amount of unlabeled clean data. Second, to fully eliminate the hidden backdoor patterns, we present an attention transfer method that redirects the models' attention away from the trigger regions. Extensive evaluation demonstrates that BadCleaner can reduce the success rates of state-of-the-art backdoor attacks without compromising model performance.
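The abstract's two ingredients, soft-label distillation from multiple teachers and an attention-transfer penalty, can be sketched in a minimal NumPy form. This is an illustrative reconstruction under assumptions, not the paper's actual implementation: the attention map here is the channel-wise sum of squared activations (a common choice in attention-transfer work), the teachers are simply averaged, and the weighting coefficient `beta` is hypothetical.

```python
import numpy as np

def attention_map(features):
    """Spatial attention from a (C, H, W) activation tensor:
    channel-wise sum of squares, L2-normalized (assumed form)."""
    a = np.sum(features ** 2, axis=0)
    return a / (np.linalg.norm(a) + 1e-8)

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits_list,
                 student_feats, teacher_feats_list,
                 temperature=2.0, beta=0.5):
    """Multi-teacher distillation with an attention-transfer term.
    All names and the averaging scheme are illustrative assumptions."""
    # Soft-label term: cross-entropy between the student's softened
    # prediction and the average of the teachers' softened predictions.
    p_s = softmax(student_logits, temperature)
    p_t = np.mean([softmax(t, temperature) for t in teacher_logits_list],
                  axis=0)
    kd = -np.sum(p_t * np.log(p_s + 1e-8))
    # Attention-transfer term: pull the student's spatial attention
    # toward the teachers' average attention, so attention on trigger
    # regions (absent in the clean teachers) is suppressed.
    a_s = attention_map(student_feats)
    a_t = np.mean([attention_map(f) for f in teacher_feats_list], axis=0)
    at = np.linalg.norm(a_s - a_t)
    return kd + beta * at

# Toy usage with random tensors standing in for real activations.
rng = np.random.default_rng(0)
loss = distill_loss(
    student_logits=rng.normal(size=10),
    teacher_logits_list=[rng.normal(size=10) for _ in range(3)],
    student_feats=rng.normal(size=(8, 4, 4)),
    teacher_feats_list=[rng.normal(size=(8, 4, 4)) for _ in range(3)],
)
print(loss)
```

Minimizing this combined objective on a small unlabeled clean set is the essence of the approach the abstract describes: the soft-label term preserves clean-task accuracy (the "lossless" property), while the attention term erases trigger-focused attention patterns.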