Differential privacy
Computer science
Joint (building)
Computer security
Data mining
Engineering
Civil engineering
Authors
Lefeng Zhang,Tianqing Zhu,Ping Xiong,Wanlei Zhou,Philip S. Yu
Source
Journal: IEEE Transactions on Knowledge and Data Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-04
Volume/Issue: 35 (4): 3333-3346
Citations: 49
Identifier
DOI:10.1109/tkde.2021.3140131
Abstract
Federated learning is a promising distributed machine learning paradigm that has been playing a significant role in providing privacy-preserving learning solutions. However, alongside all its achievements, there are also limitations. First, traditional frameworks assume that all the clients are voluntary and will participate in training solely to improve the model's accuracy. In reality, clients usually want to be adequately compensated for the data and resources they contribute before participating. Second, today's frameworks do not offer sufficient protection against malicious participants who try to skew a jointly trained model with poisoned updates. To address these concerns, we have developed a more robust federated learning scheme based on joint differential privacy. The framework provides two game-theoretic mechanisms to motivate clients to participate in training. These mechanisms are dominant-strategy truthful, individually rational, and budget-balanced. Further, the influence an adversarial client can have is quantified and restricted, and data privacy is similarly guaranteed in quantitative terms. Experiments with different training models on real-world datasets demonstrate the effectiveness of the proposed approach.
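The abstract's incentive properties (dominant-strategy truthfulness, individual rationality) are characteristic of critical-value payment rules. As an illustrative sketch only, and not the paper's actual mechanism, a reverse auction that selects the k cheapest clients and pays each winner the first excluded bid makes truthful cost reporting a dominant strategy (function name and parameters are hypothetical):

```python
def reverse_auction(bids, k):
    """Select k federated-learning clients via a second-price-style
    reverse auction.

    bids: dict mapping client id -> claimed participation cost.
    Returns (winners, payment): the k lowest-cost clients, each paid
    the (k+1)-th lowest bid. Because a winner's payment does not depend
    on its own bid, under-reporting cost cannot raise the payment and
    over-reporting only risks losing, so truthful bidding dominates.
    """
    order = sorted(bids, key=bids.get)          # cheapest bidders first
    winners = order[:k]
    payment = bids[order[k]] if len(order) > k else None
    return winners, payment
```

For example, with bids `{'a': 1, 'b': 3, 'c': 2, 'd': 5}` and `k=2`, clients `a` and `c` win and each is paid 3, the lowest losing bid.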
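Restricting an adversarial client's influence on the jointly trained model is commonly achieved by bounding each update's norm before aggregation and adding calibrated noise; the sketch below shows that standard pattern under assumed parameters (`clip_norm`, `noise_mult` are hypothetical names), not the paper's exact joint-differential-privacy construction:

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_mult=1.0, seed=0):
    """Aggregate client model updates with per-client L2 clipping and
    Gaussian noise. Clipping caps the contribution any single (possibly
    poisoned) update can make; the noise masks individual contributions.
    Illustrative sketch only, assuming this clip-and-noise scheme.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise scale proportional to the per-client sensitivity clip_norm / n.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(updates),
                       size=avg.shape)
    return avg + noise
```

With `noise_mult=0`, a single update of norm 5 such as `[3.0, 4.0]` is clipped to norm 1, giving `[0.6, 0.8]`; a malicious client submitting an arbitrarily large vector therefore shifts the aggregate by at most `clip_norm / n`.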