Computer science
Overhead (engineering)
Protocol (science)
Federated learning
Computer network
Artificial intelligence
Medicine
Alternative medicine
Pathology
Operating system
Authors
Shiwei Lu,Ruihu Li,Wenbin Liu,Chaofeng Guan,Xiaopeng Yang
Identifier
DOI:10.1016/j.cose.2022.102993
Abstract
The proposal of federated learning solves the problems of data silos and privacy protection in the field of artificial intelligence. However, privacy attacks can infer or reconstruct sensitive information from submitted gradients, causing leakage of users' private data in federated learning. The secure aggregation (SecAgg) protocol can protect users' privacy while completing federated learning tasks, but it incurs significant communication overhead and wall-clock training time on large-scale model training tasks. Thus, it is difficult to apply SecAgg in bandwidth-limited federated applications. Recently, Rand-k sparsification with secure aggregation (Rand-k SparseSecAgg) was proposed to optimize the SecAgg protocol, but its reduction of communication overhead and training time is limited. In this paper, we replace Rand-k sparsification with Top-k sparsification and design a Top-k sparsification with secure aggregation (Top-k SparseSecAgg) protocol for privacy-preserving federated learning that further reduces communication overhead and wall-clock training time. In addition, we optimize the proposed protocol by assigning clients to different groups in the logical layer, which reduces the upper limit of the compression ratio and the practical communication overhead in Top-k SparseSecAgg. Experiments demonstrate that Top-k SparseSecAgg reduces communication overhead by 6.25× compared to SecAgg and 3.78× compared to Rand-k SparseSecAgg, and reduces wall-clock training time by 1.43× compared to SecAgg and 1.13× compared to Rand-k SparseSecAgg. Thus, our protocol is better suited to bandwidth-limited federated applications that must protect privacy while completing learning tasks.
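The compression step the abstract builds on, Top-k sparsification, keeps only the k largest-magnitude coordinates of a client's gradient and transmits them as (index, value) pairs instead of the full dense vector. The sketch below illustrates that step alone; the function names and plain-Python representation are illustrative assumptions, not the paper's implementation, and the secure-aggregation masking layer is omitted.

```python
import heapq


def top_k_sparsify(grad, k):
    """Return the indices and values of the k largest-magnitude entries.

    Only these k pairs are transmitted; all other coordinates are
    treated as zero by the receiver, which is the source of the
    communication savings over sending the dense gradient.
    """
    idx = heapq.nlargest(k, range(len(grad)), key=lambda i: abs(grad[i]))
    return idx, [grad[i] for i in idx]


def densify(indices, values, dim):
    """Rebuild a dense vector from its sparse (index, value) form."""
    out = [0.0] * dim
    for i, v in zip(indices, values):
        out[i] = v
    return out


# Example: compress a 6-dimensional gradient to its top-2 entries.
g = [0.1, -3.0, 0.2, 2.5, -0.05, 0.4]
idx, vals = top_k_sparsify(g, 2)       # keeps entries at positions 1 and 3
recovered = densify(idx, vals, len(g))
```

Unlike Rand-k, which samples k coordinates uniformly at random, Top-k deterministically retains the coordinates carrying the most gradient mass, which is why it can reach the same accuracy with a smaller k and hence less communication.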