Keywords
Backdoor
Computer science
Artificial intelligence
Benchmark
Task
Edge device
Deep learning
Artificial neural network
Machine learning
Dual attention
Data mining
Computer security
Operating system
Literature
Geodesy
Art
Economics
Cloud computing
Management
Geography
Authors
Yan Jin, Yingchi Mao, Hongguang Nie, Zijian Tu, Ji Huang
Identifier
DOI:10.1109/bigdataservice55688.2022.00030
Abstract
As a distributed machine learning paradigm, federated learning allows clients to collaboratively train models without sharing their private data, effectively addressing data privacy issues in edge computing scenarios. However, recent studies have shown that neural network models in federated learning are vulnerable to backdoor attacks, which cause the global model to produce wrong inference results with high confidence, such as recognizing stop signs as speed limit signs in image classification; this can have serious consequences. Existing federated learning defense methods require long computation times and cannot break the matching relationship between triggers and backdoors. To address these problems, a federated learning backdoor attack defense based on a dual attention mechanism (FDDAM) is proposed. Model weights are dynamically adjusted during training, no additional models are required, and the computation time is shorter. First, so that the model ignores triggers, image semantics are enhanced and a channel attention map is built. Second, to break the matching relationship between triggers and backdoors, a feature map spatial transformation network is constructed. Finally, to improve the defense success rate, the channel attention map and the spatial attention map are weighted to build a dual attention network. Experiments with FDDAM on image classification datasets show an average increase of 1.68% in model accuracy and 3.11% in defense success rate, and an average 1.85-fold reduction in computation time compared to the benchmark method.
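To illustrate the "weighted combination of a channel attention map and a spatial attention map" described in the abstract, the following is a minimal PyTorch sketch. It is not the authors' FDDAM code: the semantic-enhancement step, the feature map spatial transformation network, and the federated training loop are omitted, and the module names, layer sizes, and the learnable mixing weight alpha are assumptions for illustration only.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze global context per channel, then re-weight channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.mlp(self.avg_pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise re-weighted feature map


class SpatialAttention(nn.Module):
    """Produce a per-pixel attention map from pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn  # spatially re-weighted feature map


class DualAttention(nn.Module):
    """Weighted combination of the channel and spatial attention branches."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()
        # Learnable mixing weight between the two branches (assumed form;
        # the abstract only states that the two maps are "weighted").
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * self.channel_attn(x) + (1 - self.alpha) * self.spatial_attn(x)


if __name__ == "__main__":
    block = DualAttention(channels=64)
    feats = torch.randn(2, 64, 32, 32)  # dummy feature maps
    print(block(feats).shape)           # torch.Size([2, 64, 32, 32])

In this sketch the two branches are blended by a single scalar weight; the paper's actual weighting scheme and how the dual attention network is applied during federated aggregation are not specified in the abstract.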