Authors
Xingxing Tang, Hanlin Gu, Lixin Fan, Qiang Yang
Identifiers
DOI: 10.1007/978-3-031-33377-4_32
Abstract
Federated learning (FL) is a suite of technologies that allows multiple distributed participants to collaboratively build a global machine learning model without disclosing their private datasets to each other. We consider an FL setting in which there may exist both a) semi-honest participants, who aim to eavesdrop on other participants' private datasets, and b) Byzantine participants, who aim to degrade the performance of the global model by submitting detrimental model updates. The proposed framework leverages the Expectation-Maximization algorithm: the E-step estimates the unknown membership of each participant as Byzantine or benign, and the M-step optimizes global model performance by excluding the malicious model updates uploaded by Byzantine participants. One novel feature of the proposed method, which enables reliable detection of Byzantine participants even under HE or MPC protection, is that participant membership is estimated from the performance of a set of randomly generated candidate models evaluated by all participants. Extensive experiments and theoretical analysis demonstrate that our framework guarantees Byzantine fault tolerance in various federated learning settings with privacy-preserving mechanisms.
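The abstract's E-step/M-step detection idea can be illustrated with a minimal sketch: each participant reports an evaluation score for a candidate model, and a two-component Gaussian mixture fitted by EM separates benign from Byzantine reporters. The data, cluster parameters, and the 0.5 decision threshold below are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D evaluation scores: benign participants report scores
# clustered near the true candidate-model performance, while Byzantine
# participants report outliers. Counts and distributions are made up.
scores = np.concatenate([
    rng.normal(0.0, 0.5, size=8),   # 8 benign participants
    rng.normal(5.0, 0.5, size=2),   # 2 Byzantine participants
])

# Initialize a two-component Gaussian mixture (benign / Byzantine).
mu = np.array([scores.min(), scores.max()])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component per participant
    # (the shared 1/sqrt(2*pi) constant cancels in the normalization).
    dens = pi * np.exp(-0.5 * ((scores[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixture weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / len(scores)
    mu = (resp * scores[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (scores[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6

# Membership estimate: flag participants the outlier component claims.
byzantine = resp[:, 1] > 0.5
print(byzantine)
```

In the paper's setting the M-step would then aggregate only the updates from participants not flagged as Byzantine; here the mixture simply recovers the two reporting clusters.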