Authors
Xueyu Wu, Xin Yao, Cho-Li Wang
Source
Journal: IEEE Transactions on Parallel and Distributed Systems
[Institute of Electrical and Electronics Engineers]
Date: 2020-01-01
Volume/Issue: 1-1
Cited by: 34
Identifier
DOI: 10.1109/tpds.2020.3046250
Abstract
Federated learning allows edge devices to collaboratively train a shared model on their local data without leaking user privacy. Two challenges must be tackled in federated learning: the non-independent-and-identically-distributed (Non-IID) nature of the data, which leads to severe accuracy degradation, and the enormous communication overhead of aggregating parameters. In this article, we conduct a detailed analysis of parameter updates on Non-IID datasets and compare them with the IID setting. Experimental results show that the parameter update matrices are structure-sparse and that more gradients can be identified as negligible updates on Non-IID data. Based on this observation, we propose a structure-based communication reduction algorithm, called FedSCR, that reduces the number of parameters transported through the network while maintaining model accuracy. FedSCR aggregates the parameter updates over channels and filters, then identifies and removes redundant updates by comparing the aggregated values against a threshold. Unlike traditional structured pruning methods, FedSCR retains the complete model, which does not need to be retrained or fine-tuned. Because of the unbalanced data distribution, the local loss and weight divergence vary considerably across devices. We therefore further propose an adaptive FedSCR, which dynamically adjusts the bounded threshold to enhance model robustness on Non-IID data. Evaluation results show that our proposed strategies achieve almost 50 percent upstream communication reduction without loss of accuracy. FedSCR can be integrated into state-of-the-art federated learning algorithms to dramatically reduce the number of parameters pushed to the global server with a tolerable accuracy reduction.
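The core idea described in the abstract, aggregating a layer's parameter updates over filters and input channels and dropping whole structures whose aggregated magnitude falls below a threshold, can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' implementation; the function name `fedscr_sparsify`, the fixed threshold, and the residual-accumulation detail are assumptions for the example.

```python
import numpy as np

def fedscr_sparsify(update, threshold):
    """Structure-based sparsification of one conv-layer update.

    `update` has shape (out_ch, in_ch, kh, kw). A filter (output
    channel) or input channel whose aggregated absolute update is
    below `threshold` is zeroed out entirely, so only structured
    blocks of the update need to be uploaded.
    """
    # Aggregate absolute updates per filter (sum over in_ch, kh, kw).
    filter_scores = np.abs(update).sum(axis=(1, 2, 3))
    # Aggregate absolute updates per input channel (sum over out_ch, kh, kw).
    channel_scores = np.abs(update).sum(axis=(0, 2, 3))

    sparse = update.copy()
    sparse[filter_scores < threshold, :, :, :] = 0.0
    sparse[:, channel_scores < threshold, :, :] = 0.0

    # The dropped part is kept locally as a residual and can be
    # accumulated into the next round's update, so the information
    # is delayed rather than permanently lost.
    residual = update - sparse
    return sparse, residual
```

An adaptive variant, as proposed in the article, would adjust `threshold` per device and per round, e.g. as a function of the local loss or weight divergence, instead of keeping it fixed.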