Computer science
Differential privacy
Benchmark (surveying)
Server
Entropy (arrow of time)
Scheme (mathematics)
Federated learning
Data mining
Machine learning
Artificial intelligence
Computer security
Computer network
Quantum mechanics
Mathematics
Geodesy
Physics
Mathematical analysis
Geography
Authors
Yinbin Miao, Wei Zheng, Xinghua Li, Hongwei Li, Kim-Kwang Raymond Choo, Robert H. Deng
Identifier
DOI:10.1109/tifs.2023.3282574
Abstract
Federated Learning (FL) has been widely used in fields such as financial risk control, e-government, and smart healthcare. To protect data privacy, many privacy-preserving FL approaches have been designed and deployed in various scenarios. However, existing works impose high communication burdens on clients and suffer reduced model accuracy because the data samples separately owned by clients are non-Independently and Identically Distributed (non-IID). To solve these issues, in this paper we propose a secure Model-Contrastive Federated Learning with improved Compressive Sensing (MCFL-CS) scheme, motivated by contrastive learning. We combine model-contrastive loss and cross-entropy loss in the local network architecture, which alleviates the impact of data heterogeneity on model accuracy. We then use improved compressive sensing and local differential privacy to reduce communication costs and prevent leakage of clients' privacy. Formal security analysis shows that our scheme satisfies (ε,δ)-differential privacy, and extensive experiments on five benchmark datasets demonstrate that, compared with FedAvg, our scheme improves model accuracy by 3.45% on average across all datasets under the non-IID setting and reduces communication costs by more than 95%.
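The local objective combines a model-contrastive loss with the usual cross-entropy loss. The abstract does not give the exact formulation, so the following PyTorch sketch only illustrates one common way (a MOON-style model-contrastive term) such a combined objective could look; the function name combined_loss and the hyperparameters mu and tau are illustrative assumptions, not values from the paper.

    import torch
    import torch.nn.functional as F

    def combined_loss(logits, labels, z_local, z_global, z_prev, mu=1.0, tau=0.5):
        # Illustrative sketch (not the paper's exact loss): cross-entropy plus a
        # MOON-style model-contrastive term computed on feature representations.
        ce = F.cross_entropy(logits, labels)
        # Temperature-scaled cosine similarities between the current local
        # representation and the global / previous-round local representations.
        sim_glob = F.cosine_similarity(z_local, z_global, dim=-1) / tau
        sim_prev = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
        # Pull local features toward the global model's and push them away from
        # the previous local model's, counteracting drift under non-IID data.
        con = -torch.log(torch.exp(sim_glob) /
                         (torch.exp(sim_glob) + torch.exp(sim_prev))).mean()
        return ce + mu * con

An (ε,δ)-differential-privacy guarantee of this kind is typically obtained by clipping each client's (compressed) update and adding Gaussian noise before upload. The sketch below shows the standard Gaussian mechanism with noise scale σ = C·sqrt(2 ln(1.25/δ))/ε (valid for ε ≤ 1); the function name and parameters are assumptions for illustration, not the paper's exact perturbation step.

    import numpy as np

    def perturb_update(update, clip_norm, eps, delta, rng=None):
        # Illustrative Gaussian mechanism: clip the update to L2 norm clip_norm,
        # then add noise calibrated so a single release is (eps, delta)-DP.
        rng = rng or np.random.default_rng()
        scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
        clipped = update * scale
        sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return clipped + rng.normal(0.0, sigma, size=clipped.shape)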