Computer Science
Federated Learning
Personalization
Intuition
Raw Data
Data Sharing
Information Privacy
Server
Data Modeling
Artificial Intelligence
Data Science
Machine Learning
World Wide Web
Computer Security
Database
Medicine
Philosophy
Alternative Medicine
Epistemology
Pathology
Programming Language
Authors
Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-30
Volume/Issue: 11 (11): 19380-19393
Citations: 3
Identifiers
DOI: 10.1109/jiot.2024.3360153
Abstract
Federated learning is a distributed learning paradigm in which a global model is trained on data samples from multiple clients without sharing the raw data. However, it comes with significant challenges in system design, data quality, and communication. Recent research highlights a serious privacy concern: data can be leaked by reverse-engineering model gradients at a malicious server. Moreover, a single global model cannot provide good utility for individual clients when the local training data are heterogeneous in quantity, quality, and distribution. Hence, personalized federated learning is highly desirable in practice to tailor the trained model for local usage. In this paper, we propose PPFed, a unified federated learning framework that simultaneously addresses privacy preservation and personalization. The intuition behind our framework is to learn part of the model gradients at the server and the remaining gradients at the local clients. To evaluate the effectiveness of the proposed framework, we conduct extensive experiments on four image classification datasets and show that our framework yields better privacy and personalization performance than existing methods. We also argue that privacy preservation and personalization are essentially two facets of deep learning models, offering a unique perspective on their intrinsic interrelation.
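To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of "learn part of the model gradients at the server and the rest at the local clients": a model split into a shared block whose gradients are uploaded and averaged by the server, and a personal head whose gradients never leave the client. This is a minimal, assumption-laden sketch, not PPFed's actual algorithm; the names (SplitModel, local_step, server_aggregate), the layer split, and the toy data are all hypothetical.

```python
# Hypothetical sketch of a split-gradient federated round (not PPFed's real API).
import copy
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        # "shared" part: gradients are uploaded and aggregated at the server
        self.shared = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        # "personal" part: gradients stay on the client for personalization/privacy
        self.personal = nn.Linear(128, 10)

    def forward(self, x):
        return self.personal(self.shared(x))

def local_step(model, x, y, lr=0.1):
    """One local SGD step; returns only the shared-part gradients."""
    loss = nn.functional.cross_entropy(model(x), y)
    model.zero_grad()
    loss.backward()
    # update the personal head locally; its gradients are never sent out
    with torch.no_grad():
        for p in model.personal.parameters():
            p -= lr * p.grad
    # expose only the shared-part gradients for server-side aggregation
    return [p.grad.clone() for p in model.shared.parameters()]

def server_aggregate(clients_grads):
    """Average the shared-part gradients uploaded by all clients."""
    return [torch.mean(torch.stack(gs), dim=0) for gs in zip(*clients_grads)]

# Toy round: all clients start from a common global model, train on random data.
global_model = SplitModel()
clients = [copy.deepcopy(global_model) for _ in range(2)]
grads = [local_step(m, torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
         for m in clients]
avg = server_aggregate(grads)
with torch.no_grad():  # clients apply the server-averaged shared update
    for m in clients:
        for p, g in zip(m.shared.parameters(), avg):
            p -= 0.1 * g
```

Under this reading, the server only ever sees gradients for the shared block, which limits what a gradient-inversion attack can reconstruct, while the locally trained personal head adapts each client's model to its own data distribution.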