Computer Science
Ticket
Pruning
Edge Computing
Enhanced Data Rates for GSM Evolution (EDGE)
Computer Security
Information Privacy
Privacy Protection
Computer Networks
Theoretical Computer Science
Artificial Intelligence
Agronomy
Biology
Authors
Yifan Shi, Kang Wei, Li Shen, Jun Li, Xueqian Wang, Bo Yuan, Song Guo
Identifier
DOI:10.1109/TMC.2024.3370967
Abstract
Federated learning (FL) enables collaborative training across multiple mobile terminals (MTs), but it faces critical challenges in communication, resources, and privacy. Existing privacy-preserving methods usually adopt instance-level differential privacy (DP), which provides a rigorous privacy guarantee but suffers from several bottlenecks: performance degradation, transmission overhead, and resource constraints. Therefore, we propose Fed-LTP, an efficient and privacy-enhanced FL framework based on the Lottery Ticket Hypothesis (LTH) and zero-concentrated DP (zCDP). It generates a pruned global model on the server side and conducts sparse-to-sparse training from scratch with zCDP on the client side. On the server side, two pruning schemes are proposed: (i) weight-based pruning (LTH) determines the structure of the pruned global model; (ii) iterative pruning further shrinks the size of the pruned model. Meanwhile, the performance of Fed-LTP is boosted via model validation based on the Laplace mechanism. On the client side, we use sparse-to-sparse training to address the resource-constraint issue and provide a tighter privacy analysis to reduce the privacy budget. We evaluate the effectiveness of Fed-LTP on several real-world datasets in both independent and identically distributed (IID) and non-IID settings. The results confirm the superiority of Fed-LTP over state-of-the-art (SOTA) methods in communication, computation, and memory efficiency, while achieving a better utility-privacy trade-off.
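The abstract compresses several mechanisms into a few sentences. As a rough illustration only, the NumPy sketch below shows (i) magnitude-based pruning of the kind a lottery-ticket-style mask selection relies on, and (ii) a client-side sparse SGD step with clipped, Gaussian-noised gradients, which is the type of mechanism a zCDP accountant calibrates. This is a minimal sketch, not the authors' implementation: the function names (magnitude_prune_mask, client_sparse_dp_update) and the constants (sparsity, clip, sigma) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of lottery-ticket-style magnitude pruning plus a
# privately noised sparse-to-sparse update, on a toy NumPy "model".
# All names and constants are illustrative assumptions, not Fed-LTP itself.
import numpy as np

def magnitude_prune_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Keep the largest-magnitude fraction (1 - sparsity) of weights."""
    k = int(np.ceil((1.0 - sparsity) * weights.size))
    threshold = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def client_sparse_dp_update(weights, mask, grad, lr=0.1, clip=1.0, sigma=0.8):
    """One sparse SGD step with a clipped, Gaussian-noised gradient.

    Clipping bounds the per-step sensitivity to `clip`; Gaussian noise
    with std `sigma * clip` is the mechanism a zCDP analysis accounts
    for (each step costs roughly rho = 1 / (2 * sigma**2) under zCDP).
    """
    grad = grad * mask                             # train only unpruned weights
    norm = np.linalg.norm(grad)
    grad = grad / max(1.0, norm / clip)            # clip the update's sensitivity
    grad = grad + sigma * clip * np.random.randn(*grad.shape) * mask
    return (weights - lr * grad) * mask            # stay on the sparse support

# Toy round: the server prunes once, then a client takes one private step.
w = np.random.randn(8, 8)
mask = magnitude_prune_mask(w, sparsity=0.7)       # ~70% of weights zeroed
w = w * mask
w = client_sparse_dp_update(w, mask, grad=np.random.randn(8, 8))
```

In this sketch the mask is fixed before client training, so both communication and computation scale with the number of surviving weights; iterating the pruning step on the server (re-applying magnitude_prune_mask at increasing sparsity) would mimic the iterative-pruning scheme the abstract mentions.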