Computer science
Artificial intelligence
Mobile device
Distributed computing
Machine learning
Heuristic
Scheme (mathematics)
Human-computer interaction
Mathematical analysis
Mathematics
Operating system
Authors
P Wang, Tao Ouyang, Qiong Wu, Qianyi Huang, Jie Gong, Xu Chen
Identifier
DOI:10.1016/j.sysarc.2023.103052
Abstract
Federated Learning (FL) has recently received extensive attention for enabling privacy-preserving edge AI services for Human Activity Recognition (HAR). However, users' mobile and wearable devices in HAR scenarios usually possess dramatically different computing capabilities and diverse data distributions, making it very challenging for such heterogeneous HAR devices to conduct effective collaborative training (co-training) with traditional FL schemes. To address this issue, we present Hydra, a Hybrid-model federated learning mechanism that facilitates co-training among heterogeneous devices by allowing them to train models that fit their own computing capabilities. Specifically, Hydra leverages BranchyNet to design a large-small global hybrid model and enables heterogeneous devices to train the parts of the model tailored to their computing capabilities. Hydra drives co-training among the devices and clusters them based on model similarity to mitigate the impact of HAR data heterogeneity on model accuracy. To deal with the issue that the large model may lack sufficient training data due to the limited number of high-performance devices in FL, we introduce a pairing scheme between high- and low-performance devices for effective co-training, and further propose a sample selection approach to choose more valuable samples for co-training. We then formulate a constrained co-training problem within a cluster, prove it to be NP-hard, and devise a fast greedy-based heuristic algorithm to solve it. In addition, to address the low accuracy of small models, we propose a Large-to-Small knowledge distillation algorithm for resource-constrained devices to improve the efficiency of transferring knowledge from large models to small models. We conduct extensive experiments on three HAR datasets, and the results demonstrate that Hydra achieves outstanding model accuracy improvements compared with other state-of-the-art schemes.
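To make the large-small hybrid-model idea concrete, the following is a minimal sketch of a BranchyNet-style network with an early exit: a low-capability device would train only the shared backbone and the small early-exit head, while a high-performance device would additionally train the deeper trunk and the large head. The layer sizes, the HAR input shape (6 sensor channels, 128 time steps), and the class count are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (not the paper's implementation) of a large-small hybrid HAR model
# in the spirit of BranchyNet: a shared backbone with an early-exit ("small") head
# and a deeper trunk with a "large" head.
import torch
import torch.nn as nn


class HybridHARModel(nn.Module):
    """Large-small hybrid model with an early exit for low-capability devices."""

    def __init__(self, in_channels: int = 6, num_classes: int = 6):
        super().__init__()
        # Shallow backbone shared by both the small and the large model.
        self.shared = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Early-exit branch: the "small" model trained on resource-constrained devices.
        self.small_head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )
        # Deeper trunk plus head: the "large" model trained on high-performance devices.
        self.deep = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.large_head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor, use_large: bool = True):
        feat = self.shared(x)
        small_logits = self.small_head(feat)
        if not use_large:  # low-capability device: stop at the early exit
            return small_logits, None
        large_logits = self.large_head(self.deep(feat))
        return small_logits, large_logits


# Example usage: a low-capability device trains only the early-exit branch.
# x = torch.randn(8, 6, 128)                # 8 windows, 6 channels, 128 time steps
# small_logits, _ = HybridHARModel()(x, use_large=False)
```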
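The clustering of devices by model similarity could, for instance, compare flattened model updates with cosine similarity and group devices whose updates point in similar directions. The sketch below uses a simple greedy thresholding rule purely for illustration; the abstract does not specify the paper's actual similarity metric or clustering procedure, so the threshold and grouping rule here are assumptions.

```python
# Illustrative sketch of clustering clients by model similarity, assuming each
# client's update is given as a list of parameter tensors.
import torch
import torch.nn.functional as F


def cosine_similarity_matrix(updates):
    """Pairwise cosine similarity between flattened per-client model updates."""
    vecs = torch.stack([torch.cat([p.flatten() for p in u]) for u in updates])
    vecs = F.normalize(vecs, dim=1)
    return vecs @ vecs.T


def greedy_clusters(sim: torch.Tensor, threshold: float = 0.8):
    """Greedily assign each client to the first cluster whose seed is similar enough."""
    clusters = []  # each cluster is a list of client indices; index 0 is the seed
    for i in range(sim.shape[0]):
        for cluster in clusters:
            if sim[i, cluster[0]] >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```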
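The Large-to-Small knowledge distillation step can be illustrated with a standard distillation loss in which the small branch learns from both the ground-truth labels and the softened predictions of the large branch. The temperature and loss weight below are assumed hyperparameters for the sketch, not values taken from the paper.

```python
# Minimal sketch of a large-to-small distillation loss: hard-label cross-entropy
# plus temperature-scaled KL divergence toward the large branch's soft predictions.
import torch
import torch.nn.functional as F


def large_to_small_distillation_loss(
    small_logits: torch.Tensor,
    large_logits: torch.Tensor,
    labels: torch.Tensor,
    temperature: float = 3.0,
    alpha: float = 0.5,
) -> torch.Tensor:
    # Supervised loss on the small model's own predictions.
    ce = F.cross_entropy(small_logits, labels)
    # Soft targets from the (detached) large model, softened by the temperature.
    soft_targets = F.softmax(large_logits.detach() / temperature, dim=1)
    log_probs = F.log_softmax(small_logits / temperature, dim=1)
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```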