Keywords
Upload, Computer science, Algorithm, Byzantine architecture, Independent and identically distributed random variables, Byzantine fault tolerance, Server, Filter (signal processing), Focus (optics), Machine learning, Artificial intelligence, Computer network, Distributed computing, World Wide Web, Mathematics, Statistics, Optics, Physics, Random variable, History, Fault tolerance, Computer vision, Ancient history
Authors
Qi Xia, Zeyi Tao, Qun Li, Songqing Chen
Source
Journal: IEEE Transactions on Network Science and Engineering [Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Pages: 1-13
Citations: 2
Identifiers
DOI: 10.1109/tnse.2023.3251196
Abstract
In federated learning, workers periodically upload locally computed weights to a federated learning server (FL server). When Byzantine attacks are present in the system, compromised workers may upload incorrect weights to the parameter server, i.e., the information received by the FL server is not always the true values computed by the workers. Previously proposed score-based, median-based, and distance-based defense algorithms rely on the following assumptions, which are unrealistic in federated learning: (1) the dataset on each worker is independent and identically distributed (i.i.d.), and (2) the majority of all participating workers are honest. In federated learning, however, a worker may keep a non-i.i.d. private dataset, and malicious workers may form the majority in some iterations. In this paper, we focus on model-poisoning Byzantine attacks and propose a novel reference-dataset-based algorithm along with a practical Two-Filter algorithm (ToFi) to defend against Byzantine attacks in federated learning. Our experiments highlight the effectiveness of our algorithm compared with previous algorithms in different settings.
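The median-based defenses that the abstract contrasts against can be illustrated with a minimal coordinate-wise-median sketch. This is a generic Byzantine-robust baseline (not the paper's ToFi algorithm, whose details are not given here); the function and worker values below are illustrative assumptions:

```python
import numpy as np

def coordinate_wise_median(worker_updates):
    """Aggregate worker weight updates by taking the median of each
    coordinate: a classic median-based Byzantine-robust baseline."""
    stacked = np.stack(worker_updates)  # shape: (n_workers, n_params)
    return np.median(stacked, axis=0)

# Three honest workers report similar updates; one Byzantine worker
# reports an extreme value. The median largely ignores the outlier.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
byzantine = [np.array([100.0, -100.0])]
agg = coordinate_wise_median(honest + byzantine)
print(agg)  # stays close to [1.0, 2.0] despite the attacker
```

Note that this baseline breaks under exactly the conditions the paper targets: with non-i.i.d. worker data, honest updates are no longer clustered, and once malicious workers form a majority the median itself is controlled by the attacker.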