Computer science
News aggregator
Scheme (mathematics)
Inference
Encryption
Server
Information privacy
Homomorphic encryption
Distributed computing
Computer security
Artificial intelligence
Computer network
Mathematical analysis
Mathematics
Operating system
Authors
Hong-Yen Tran, Jiankun Hu, Xuefei Yin, H. R. Pota
Identifier
DOI: 10.1109/TIFS.2023.3267892
Abstract
Federated learning is a prominent machine learning paradigm that helps tackle data privacy issues by allowing clients to keep their raw data locally and transfer only their local model parameters to an aggregator server in order to collaboratively train a shared global model. However, federated learning is vulnerable to inference attacks from dishonest aggregators, who can infer information about clients' training data from their model parameters. To deal with this issue, most schemes proposed in the literature require either a non-colluding server setting, a trusted third party to compute master secret keys, or a secure multi-party computation protocol that remains inefficient over the many iterations needed to compute an aggregated model. In this work, we propose an efficient cross-silo federated learning scheme with strong privacy preservation. By designing a double-layer encryption scheme that requires no discrete-logarithm computation, using secret sharing only in the establishment phase and in the iterations when parties rejoin, and accelerating computation via parallel processing, we achieve an efficient privacy-preserving federated learning protocol that also allows clients to drop out and rejoin during training. The proposed scheme is demonstrated theoretically and empirically to provide provable privacy against an honest-but-curious aggregator server while achieving desirable model utility. The scheme is applied to false data injection attack (FDIA) detection in smart grids, yielding a cross-silo FDIA federated learning approach that is more resilient to inference attacks on local private data than existing works.
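The abstract does not spell out the construction of the double-layer encryption or the secret-sharing step, so the following is only a minimal sketch of the general idea behind privacy-preserving aggregation: each pair of clients derives a shared additive mask that cancels when the honest-but-curious server sums the masked updates. All names, the mask derivation, and the fixed-point quantization are hypothetical illustrations, not the paper's actual scheme.

```python
# Minimal sketch of pairwise additive masking for private aggregation.
# Hypothetical illustration only; NOT the paper's double-layer encryption scheme.
import numpy as np

MODULUS = 2**32  # arithmetic is done modulo a power of two (assumption)

def pairwise_mask(update, client_id, peer_ids, seeds):
    """Mask a quantized model update so the server only learns the sum."""
    masked = update.astype(np.int64) % MODULUS
    for peer in peer_ids:
        if peer == client_id:
            continue
        # Both parties of a pair derive the same mask from a shared seed.
        # The sign convention (smaller id adds, larger id subtracts) makes
        # the masks cancel when the server adds all masked updates.
        rng = np.random.default_rng(seeds[frozenset((client_id, peer))])
        mask = rng.integers(0, MODULUS, size=update.shape, dtype=np.int64)
        if client_id < peer:
            masked = (masked + mask) % MODULUS
        else:
            masked = (masked - mask) % MODULUS
    return masked

def aggregate(masked_updates):
    """Honest-but-curious server: sums masked updates; pairwise masks cancel."""
    total = np.zeros_like(masked_updates[0])
    for m in masked_updates:
        total = (total + m) % MODULUS
    return total

# Usage: three clients with quantized updates of length 4.
clients = [0, 1, 2]
seeds = {frozenset(p): hash(p) % (2**31) for p in [(0, 1), (0, 2), (1, 2)]}
updates = [np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]), np.array([9, 10, 11, 12])]
masked = [pairwise_mask(u, c, clients, seeds) for c, u in zip(clients, updates)]
print(aggregate(masked))  # equals the plain sum [15 18 21 24] (mod 2**32)
```

In a dropout-tolerant protocol of this kind, the pairwise seeds are typically secret-shared among the clients so that the masks of a dropped client can be reconstructed and removed from the aggregate; the paper's scheme reports needing that secret sharing only at establishment and when parties rejoin.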