Computer science
Enhanced Data Rates for GSM Evolution (EDGE)
Edge device
Edge computing
Computer architecture
Personalized learning
Distributed computing
Artificial intelligence
Human-computer interaction
World Wide Web
Cloud computing
Operating system
Mathematics education
Cooperative learning
Open learning
Teaching method
Mathematics
Authors
Kaibin Wang,Qiang He,Feifei Chen,Chunyang Chen,Faliang Huang,Hai Jin,Yun Yang
Identifier
DOI:10.1145/3543507.3583347
Abstract
Mobile and Web-of-Things (WoT) devices at the network edge account for more than half of the world's web traffic, making them a rich data source for various machine learning (ML) applications, particularly federated learning (FL), which offers a promising solution for privacy-preserving ML on these data. FL allows edge mobile and WoT devices to train a shared global ML model under the orchestration of a central parameter server. In the real world, due to resource heterogeneity, these edge devices often train different versions of a model (e.g., VGG-16 and VGG-19) or different ML models (e.g., VGG and ResNet) for the same ML task (e.g., computer vision or speech recognition). Existing FL schemes assume that participating edge devices share a common model architecture, and thus cannot facilitate FL across edge devices with heterogeneous ML model architectures. We explored this architecture-heterogeneity challenge and found that FL can and should accommodate these edge devices to improve model accuracy and accelerate model training. This paper presents our findings and FlexiFed, a novel scheme for FL across edge devices with heterogeneous model architectures, together with three model aggregation strategies for accommodating architecture heterogeneity under FlexiFed. Experiments with four widely used ML models on four public datasets demonstrate 1) the usefulness of FlexiFed; and 2) that, compared with the state-of-the-art FL scheme, FlexiFed improves model accuracy by 2.6%-9.7% and accelerates model convergence by 1.24x-4.04x.
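One way to aggregate across heterogeneous architectures that share a common base (e.g., VGG-16 and VGG-19 share early convolutional layers) is to average only the longest common prefix of layers and leave the deeper, architecture-specific layers local. The sketch below illustrates that idea in plain Python; the function name, data layout, and example layer names are illustrative assumptions, not FlexiFed's actual API or its exact aggregation strategies.

```python
# Illustrative sketch: FedAvg-style averaging restricted to the common
# layer prefix shared by clients with different model depths. The data
# layout (list of (layer_name, weights) pairs) is a simplification for
# demonstration, not the paper's implementation.

def aggregate_common_prefix(clients):
    """Average the longest common prefix of the clients' layer lists.

    `clients` is a list of models; each model is a list of
    (layer_name, weights) pairs, where weights is a list of floats.
    Layers beyond the shared prefix stay local to each client.
    Returns the number of aggregated layers.
    """
    depth = min(len(model) for model in clients)
    # Find how many leading layers agree in name and parameter count.
    common = 0
    for i in range(depth):
        names = {model[i][0] for model in clients}
        sizes = {len(model[i][1]) for model in clients}
        if len(names) == 1 and len(sizes) == 1:
            common += 1
        else:
            break
    # Element-wise average over the common prefix, written back to all.
    for i in range(common):
        avg = [sum(vals) / len(clients)
               for vals in zip(*(model[i][1] for model in clients))]
        for model in clients:
            model[i] = (model[i][0], avg)
    return common

# Toy example: two clients whose architectures share two base layers.
vgg16 = [("conv1", [1.0, 2.0]), ("conv2", [3.0]), ("fc", [5.0])]
vgg19 = [("conv1", [3.0, 4.0]), ("conv2", [5.0]),
         ("conv3", [0.0]), ("fc", [7.0])]
shared = aggregate_common_prefix([vgg16, vgg19])  # aggregates 2 layers
```

After aggregation, both clients hold identical weights for `conv1` and `conv2`, while `vgg16`'s `fc` and `vgg19`'s `conv3`/`fc` remain untouched, which is the essential property any common-prefix strategy must preserve.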