Computer science
Inference
Federated learning
Human-computer interaction
World Wide Web
Multimedia
Distributed computing
Artificial intelligence
Authors
Yangguang Cui, Zhixing Zhang, Nuo Wang, Liying Li, Chun-Wei Chang, Tongquan Wei
Identifier
DOI:10.1109/tc.2023.3327513
Abstract
Deep learning as a service (DLaaS), which promotes deep-learning-based applications by selling computing services from IT companies to end users, introduces potential privacy leaks for users and cloud servers. Federated learning (FL) is an emerging distributed paradigm that enables numerous users to collaboratively train deep-learning models while protecting user privacy and data security. However, many existing FL works focus only on relieving the communication bottleneck caused by frequent model-parameter transmission, while ignoring both the performance degradation incurred by imbalanced user distribution and the high inference latency caused by complex deep-learning models in the emerging IoT-edge-cloud FL. In this paper, we propose an efficient user-distribution-aware hierarchical FL scheme for communication-efficient training and fast inference in the IoT-edge-cloud DLaaS architecture. Specifically, we propose a user-distribution-aware hierarchical FL architecture to cope with the performance degradation caused by imbalanced user distribution. The proposed architecture also features a lightweight deep neural network that adopts the designed lightweight fire modules as components and has a side branch for communication-efficient training and fast inference. Extensive experiments demonstrate that the proposed schemes boost accuracy by up to 67.12%, save 47.98% of communication costs, and accelerate inference by up to 87.24× compared to benchmarking methods.
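The abstract does not specify the fire-module design, the side-branch exit policy, or the hierarchical aggregation rule, so the sketch below is illustrative only, not the authors' method: it assumes a SqueezeNet-style fire module, a confidence-threshold early exit on the side branch, and FedAvg-style weighted averaging applied twice (users to edge server, edge servers to cloud). All class names, layer sizes, and thresholds here are hypothetical.

```python
# Minimal PyTorch sketch of the three ideas in the abstract, under the
# assumptions stated above; every design choice here is hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FireModule(nn.Module):
    """Squeeze 1x1 conv feeding parallel 1x1/3x3 expand convs (SqueezeNet-style)."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.relu(self.squeeze(x))
        return torch.cat([F.relu(self.expand1x1(x)),
                          F.relu(self.expand3x3(x))], dim=1)


class EarlyExitNet(nn.Module):
    """Fire-module backbone with one side branch for fast inference."""
    def __init__(self, num_classes=10, exit_threshold=0.9):
        super().__init__()
        self.exit_threshold = exit_threshold          # hypothetical exit policy
        self.stem = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.fire1 = FireModule(64, 16, 64)           # cheap early stage -> 128 ch
        self.side_branch = nn.Linear(128, num_classes)
        self.fire2 = FireModule(128, 32, 128)         # deeper stage -> 256 ch
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = self.fire1(x)
        early = self.side_branch(F.adaptive_avg_pool2d(x, 1).flatten(1))
        if not self.training:
            # Batch-level exit for brevity; a real system would exit per
            # sample. Confident inputs skip the deeper, slower stage.
            conf = F.softmax(early, dim=1).max(dim=1).values
            if bool((conf >= self.exit_threshold).all()):
                return early
        deep = self.head(F.adaptive_avg_pool2d(self.fire2(x), 1).flatten(1))
        # During training both exits are returned so their losses can be summed.
        return (early, deep) if self.training else deep


def average_state_dicts(dicts, weights):
    """FedAvg-style weighted averaging; applied once per hierarchy level."""
    total = float(sum(weights))
    return {k: sum(w * d[k] for d, w in zip(dicts, weights)) / total
            for k in dicts[0]}


# Hypothetical two-level aggregation: three users behind two edge servers,
# weighted by local sample counts; the cloud then averages the edge models.
users = [EarlyExitNet().state_dict() for _ in range(3)]
edge_a = average_state_dicts(users[:2], weights=[100, 50])
edge_b = average_state_dicts(users[2:], weights=[80])
cloud = average_state_dicts([edge_a, edge_b], weights=[150, 80])
```

In this reading, the same lightweight model serves both goals at once: fewer parameters shrink each round's upload, and the side branch lets easy inputs skip the deeper layers at inference time.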