Computer Science
Server
Byzantine Fault Tolerance
Reinforcement Learning
Computer Network
Edge Device
Distributed Computing
Blockchain
Edge Computing
Computer Security
Artificial Intelligence
Enhanced Data Rates for GSM Evolution
Fault Tolerance
Cloud Computing
Operating System
Authors
Zhanpeng Yang, Yuanming Shi, Yong Zhou, Zixin Wang, Kai Yang
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 10 (1): 92-109
Citations: 17
Identifiers
DOI: 10.1109/jiot.2022.3201117
Abstract
The safety-critical scenarios of artificial intelligence (AI), such as autonomous driving, the Internet of Things, and smart healthcare, have raised critical requirements for trustworthy AI that guarantees privacy and security alongside reliable decisions. As a nascent branch of trustworthy AI, federated learning (FL) has been regarded as a promising privacy-preserving framework for training a global AI model over collaborative devices. However, security challenges still exist in the FL framework, e.g., Byzantine attacks from malicious devices and model-tampering attacks from a malicious server, which will degrade or destroy the accuracy of the trained global AI model. In this article, we propose a decentralized blockchain-based FL (B-FL) architecture that uses a secure global aggregation algorithm to resist malicious devices and deploys a practical Byzantine fault tolerance consensus protocol, with high effectiveness and low energy consumption, among multiple edge servers to prevent model tampering by a malicious server. However, when the B-FL system is implemented at the network edge, the multiple rounds of cross-validation in the blockchain consensus protocol induce long training latency. We thus formulate a network optimization problem that jointly considers bandwidth and power allocation to minimize the long-term average training latency over successive learning rounds. We further propose to transform the network optimization problem into a Markov decision process and leverage a deep reinforcement learning (DRL)-based algorithm to provide high system performance with low computational complexity. Simulation results demonstrate that B-FL can resist malicious attacks from edge devices and servers, and that the training latency of B-FL can be significantly reduced by the DRL-based algorithm compared with baseline algorithms.
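The abstract names a secure global aggregation algorithm for resisting Byzantine devices but does not spell it out here. The following is a minimal illustrative sketch in Python, assuming a coordinate-wise median as the robust aggregation rule (a common Byzantine-resilient choice, not necessarily the rule used in the paper); the function name and the toy updates are hypothetical.

import numpy as np

def byzantine_robust_aggregate(updates):
    # updates: list of 1-D NumPy arrays, one flattened model update per device.
    stacked = np.stack(updates, axis=0)  # shape: (num_devices, num_params)
    # The per-coordinate median limits the pull of outlier (Byzantine) updates.
    return np.median(stacked, axis=0)

# Hypothetical usage: three honest devices plus one Byzantine device
# submitting an arbitrarily corrupted update.
honest = [np.array([0.10, -0.20, 0.05]) + 0.01 * np.random.randn(3) for _ in range(3)]
byzantine = [np.full(3, 100.0)]
print(byzantine_robust_aggregate(honest + byzantine))  # stays close to the honest updates

Unlike plain federated averaging, such a median-style rule keeps a bounded minority of corrupted updates from shifting the aggregated global model, which is the role the secure aggregation step plays in the described B-FL architecture.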