Topics
Computer science, MNIST database, Asynchronous communication, Server, Inefficiency, Single point of failure, Edge computing, Enhanced Data Rates for GSM Evolution (EDGE), Aggregate (composite), Distributed computing, Edge device, Point (geometry), Convergence (economics), Artificial intelligence, Deep learning, Computer network, Operating system, Cloud computing, Composite material, Economic growth, Economics, Mathematics, Microeconomics, Materials science, Geometry
Authors
Yinghui Liu, Youyang Qu, Chenhao Xu, Zhicheng Hao, Bruce Gu
Source
Journal: Sensors (MDPI AG)
Date: 2021-05-11
Volume/Issue: 21 (10): 3335
Cited by: 49
Abstract
The rapid proliferation of edge computing devices brings ever-growing volumes of data, which directly promotes the development of machine learning (ML) technology. However, privacy issues during data collection for ML tasks raise extensive concerns. To address this, synchronous federated learning (FL) has been proposed, which enables central servers and end devices to maintain the same ML model by exchanging only model parameters. However, the diversity of computing power and data sizes across devices leads to significant differences in how quickly devices finish local training, which makes synchronous FL inefficient. Besides, the centralized processing of FL is vulnerable to single-point failure and poisoning attacks. Motivated by this, we propose an innovative method, federated learning with asynchronous convergence (FedAC), which incorporates a staleness coefficient and uses a blockchain network instead of the classic central server to aggregate the global model. This avoids real-world issues such as interruption by abnormal local device training failures, dedicated attacks, etc. We implement the proposed method on a real-world dataset, MNIST, compare it with baseline models, and achieve accuracy rates of 98.96% and 95.84% in the horizontal and vertical FL modes, respectively. Extensive evaluation results show that FedAC outperforms most existing models.
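The abstract does not spell out how the staleness coefficient enters aggregation, and the paper's code is not reproduced on this page. Below is a minimal Python sketch, assuming a polynomial staleness discount of the kind common in the asynchronous-FL literature: a late-arriving client update is mixed into the global model with a weight that shrinks as its staleness grows. The function names, the decay form, and the parameters alpha and a are hypothetical illustrations, not the paper's actual definitions, and the sketch ignores the blockchain aggregation layer entirely.

import numpy as np

def staleness_weight(tau, alpha=0.6, a=0.5):
    # Polynomial staleness discount (hypothetical form; the paper
    # defines its own coefficient): an update that is tau global
    # model versions old receives a smaller mixing weight.
    return alpha * (tau + 1) ** (-a)

def async_aggregate(global_weights, client_weights, tau, alpha=0.6, a=0.5):
    # Mix one late-arriving client update into the global model.
    # tau = number of global versions published since this client
    # fetched the model it trained on.
    w = staleness_weight(tau, alpha, a)
    return {name: (1.0 - w) * global_weights[name] + w * client_weights[name]
            for name in global_weights}

# Toy usage: a fresh update (tau=0) moves the global model much more
# than an equally sized but stale update (tau=8).
global_w = {"layer": np.zeros(3)}
client_w = {"layer": np.ones(3)}
print(async_aggregate(global_w, client_w, tau=0)["layer"])  # [0.6 0.6 0.6]
print(async_aggregate(global_w, client_w, tau=8)["layer"])  # [0.2 0.2 0.2]

Because the server never waits at a synchronization barrier, each update can be applied the moment it arrives; the staleness discount is what keeps very old updates from dragging the current global model backwards.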