Keywords
Computer science
Edge device
Exploit
Enhanced Data Rates for GSM Evolution (EDGE)
Overhead (engineering)
Heuristic
Artificial neural network
Process (computing)
Edge computing
Artificial intelligence
Federated learning
Machine learning
Scheme (mathematics)
Architecture
Distributed computing
Data mining
Visual arts
Art
Mathematical analysis
Mathematics
Operating system
Cloud computing
Computer security
Authors
Feifei Zhang, Jidong Ge, Chifong Wong, Sheng Zhang, Chuanyi Li, Bin Luo
Identifier
DOI: 10.1109/globecom46510.2021.9685909
Abstract
To exploit the vast amount of data distributed across edge devices, Federated Learning (FL) has been proposed to learn a shared model by performing training locally on participating devices and aggregating the local models into a global one. Existing FL algorithms suffer from accuracy loss because data samples across devices are usually not independent and identically distributed (non-i.i.d.). In addition, devices may lose their connection during training in wireless edge computing. We therefore advocate the one-shot Neural Architecture Search (NAS) technique as the basis for a solution that addresses the non-i.i.d. problem and is robust to intermittent connections. We adopt a large network that includes all candidate network architectures as the global model. The non-i.i.d. problem is alleviated in two steps: (1) identify and train the candidate networks that are potentially high-performing and less biased, using a heuristic sampling scheme; (2) search the candidate networks for the final model with the highest accuracy. Experimental results show that the model trained by our proposed method is robust to the non-i.i.d. problem and achieves an 84% reduction in communication overhead compared with the baselines.
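The abstract outlines a supernet-based FL pipeline: train a weight-shared supernet whose candidate sub-networks are sampled each round on participating clients, then search the trained candidates for the most accurate one. Below is a minimal, self-contained NumPy sketch of that general idea, not the paper's actual algorithm: the candidate set is assumed to be three hidden widths sharing weight slices, the paper's heuristic sampling scheme is replaced by plain uniform sampling, aggregation is FedAvg-style averaging, and intermittent connectivity is simulated by randomly dropping clients each round.

```python
# Illustrative sketch only: weight-shared "supernet" + federated rounds + final search.
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT = 8, 2                       # toy input/label dimensions (assumption)
CANDIDATE_WIDTHS = [4, 8, 16]            # candidate hidden widths, one "choice block"
MAX_W = max(CANDIDATE_WIDTHS)

# Shared supernet weights: smaller candidates reuse the leading slices (weight sharing).
W1 = rng.normal(0, 0.1, (D_IN, MAX_W))
W2 = rng.normal(0, 0.1, (MAX_W, D_OUT))

def forward(x, width):
    h = np.maximum(x @ W1[:, :width], 0.0)           # ReLU hidden layer of chosen width
    return h @ W2[:width, :]

def local_update(x, y, width, lr=0.05, steps=5):
    """One client's local SGD on the sampled sub-network; returns updated weight slices."""
    w1, w2 = W1[:, :width].copy(), W2[:width, :].copy()
    for _ in range(steps):
        h = np.maximum(x @ w1, 0.0)
        logits = h @ w2
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        g = (p - y) / len(x)                         # softmax cross-entropy gradient
        dh = (g @ w2.T) * (h > 0)                    # backprop through ReLU
        w2 -= lr * h.T @ g
        w1 -= lr * x.T @ dh
    return w1, w2

# Simulated non-i.i.d. clients: each client sees mostly one class.
def make_client(label):
    x = rng.normal(label, 1.0, (32, D_IN))
    y = np.eye(D_OUT)[np.full(32, label)]
    return x, y

clients = [make_client(i % D_OUT) for i in range(6)]

for rnd in range(50):
    width = int(rng.choice(CANDIDATE_WIDTHS))        # stand-in for the heuristic sampler
    online = [c for c in clients if rng.random() > 0.2]  # intermittent connectivity
    if not online:
        continue
    updates = [local_update(x, y, width) for x, y in online]
    W1[:, :width] = np.mean([u[0] for u in updates], axis=0)  # FedAvg on shared slice
    W2[:width, :] = np.mean([u[1] for u in updates], axis=0)

# Final step: search the trained candidates for the most accurate sub-network.
xv = np.concatenate([c[0] for c in clients])
yv = np.concatenate([c[1] for c in clients])
accs = {w: (forward(xv, w).argmax(1) == yv.argmax(1)).mean() for w in CANDIDATE_WIDTHS}
best = max(accs, key=accs.get)
print(f"accuracy per width: {accs}, selected width: {best}")
```

Because every candidate shares the supernet's weight slices, one round of communication updates many candidates at once, which is the intuition behind the reported reduction in communication overhead.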