Computer science
Single point of failure
Blockchain
Differential privacy
Raw data
Distributed learning
Federated learning
Data sharing
Process (computing)
Distributed computing
Computer security
Server (computing)
Artificial intelligence
Data mining
World Wide Web
Operating system
Pathology
Medicine
Programming language
Alternative medicine
Pedagogy
Psychology
Authors
Chuan Ma, Jun Li, Long Shi, Ming Ding, Taotao Wang, Zhu Han, H. Vincent Poor
Source
Journal: IEEE Computational Intelligence Magazine
Publisher: Institute of Electrical and Electronics Engineers
Date: 2022-07-19
Volume/Issue: 17(3): 26-33
Citations: 94
Identifier
DOI: 10.1109/mci.2022.3180932
Abstract
Motivated by the increasingly powerful computing capabilities of end-user equipment, and by the growing privacy concerns over sharing sensitive raw data, a distributed machine learning paradigm known as federated learning (FL) has emerged. By training models locally at each client and aggregating learning models at a central server, FL has the capability to avoid sharing data directly, thereby reducing privacy leakage. However, the conventional FL framework relies heavily on a single central server, and it may fail if such a server behaves maliciously. To address this single point of failure, in this work, a blockchain-assisted decentralized FL framework is investigated, which can prevent malicious clients from poisoning the learning process, and thus provides a self-motivated and reliable learning environment for clients. In this framework, the model aggregation process is fully decentralized and the tasks of training for FL and mining for blockchain are integrated into each participant. Privacy and resource-allocation issues are further investigated in the proposed framework, and a critical and unique issue inherent in the proposed framework is disclosed. In particular, a lazy client can simply duplicate models shared by other clients to reap benefits without contributing its resources to FL. To address these issues, analytical and experimental results are provided to shed light on possible solutions, i.e., adding noise to achieve local differential privacy and using pseudo-noise (PN) sequences as watermarks to detect lazy clients.
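The abstract mentions two mitigations for the proposed decentralized framework: adding noise to achieve local differential privacy, and embedding pseudo-noise (PN) sequences as watermarks so that lazy clients who duplicate others' models can be detected. The sketch below is a minimal NumPy illustration of those two ideas, not the authors' implementation; all function names, parameters (clip_norm, sigma, strength), and the toy dimensions are assumptions chosen only for demonstration.

```python
# Illustrative sketch only (not the paper's scheme): (1) noise added to a local
# model update for local differential privacy, and (2) a client-specific PN
# watermark whose correlation with a later upload exposes a lazy client that
# merely copied another client's model instead of training locally.

import numpy as np

rng = np.random.default_rng(0)


def add_ldp_noise(update, clip_norm=1.0, sigma=0.1):
    """Clip the update to an L2 ball, then add Gaussian noise (Gaussian-mechanism style)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip_norm, size=update.shape)


def pn_sequence(seed, length):
    """Client-private +/-1 pseudo-noise sequence derived from a secret seed."""
    return np.where(np.random.default_rng(seed).random(length) < 0.5, -1.0, 1.0)


def embed_watermark(update, pn, strength=0.05):
    """Additively embed the PN watermark into the flattened model update."""
    return update + strength * pn


def correlation(shared_update, pn, strength=0.05):
    """Normalized correlation: near 1 if `pn` was embedded in `shared_update`, else near 0."""
    return float(shared_update @ pn) / (strength * len(pn))


# Toy scenario: one honest client, and one lazy client that copies the honest upload.
dim = 10_000
local_update = rng.normal(0.0, 0.01, dim)      # stand-in for a locally trained model update
pn_honest = pn_sequence(seed=42, length=dim)   # honest client's private PN sequence

honest_upload = embed_watermark(add_ldp_noise(local_update), pn_honest)
lazy_upload = honest_upload.copy()             # lazy client re-shares the same model

independent = add_ldp_noise(rng.normal(0.0, 0.01, dim))  # genuinely trained, unwatermarked update

print("lazy upload vs honest PN:       ", round(correlation(lazy_upload, pn_honest), 3))   # ~1.0
print("independent upload vs honest PN:", round(correlation(independent, pn_honest), 3))   # ~0.0
```

Correlating an upload against a known PN sequence is the standard way spread-spectrum-style watermarks are detected; here a high correlation with another client's private sequence indicates the model was duplicated rather than trained locally.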