Computer science
Upload
MNIST database
Mobile device
Overhead (engineering)
Task (project management)
Federated learning
Scheme (mathematics)
Artificial intelligence
Mobile telephony
Mobile computing
Distributed computing
Machine learning
Deep learning
Computer network
Mobile radio
World Wide Web
Mathematical analysis
Mathematics
Management
Economics
Operating system
Authors
Xinglin Zhang, Zhaojing Ou, Zheng Yang
Source
Journal: IEEE Transactions on Network Science and Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-02-20
Volume/Issue: 10 (4): 2358-2371
Citations: 2
Identifier
DOI:10.1109/tnse.2023.3246463
Abstract
Federated learning (FL) is a privacy-preserving collaborative learning framework that can be used in mobile computing, where multiple user devices jointly train a deep learning model without uploading their data to a centralized server. An essential issue in FL is reducing the significant communication overhead incurred during training. Existing FL schemes mostly address this issue in the setting of single-task learning. However, each user generally has multiple related tasks on the mobile device, such as multi-content recommendation, and traditional FL schemes need to train an individual model per task, which consumes substantial resources. In this work, we formulate an FL problem with multiple personalized tasks, which aims to minimize the communication cost of learning different personalized tasks on each device. To solve the formulated problem, we incorporate multi-task learning into FL, training a single model for multiple tasks concurrently, and propose an FL framework named FedMPT. FedMPT carefully adapts an efficient acceleration algorithm and a quantization compression strategy to achieve superior communication efficiency. We implement and evaluate FedMPT on two datasets, Multi-MNIST and CelebA, in the FL environment. Experimental results show that FedMPT significantly outperforms the traditional FL scheme in terms of both communication cost and average accuracy.
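The abstract combines three ingredients: one multi-task model per client (a shared trunk plus personalized per-task heads), federated aggregation of the shared part, and quantization of the uploaded updates. The sketch below illustrates that combination on a toy linear model. It is not the authors' FedMPT code: the names (`Client`, `quantize`, `local_update`), the 8-bit uniform quantizer, and the plain FedAvg-style aggregation are illustrative assumptions, and the paper's specific acceleration algorithm is not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of federated
# multi-task training with quantized uploads. All names and the linear
# model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D, H, TASKS, CLIENTS = 8, 4, 2, 3  # feature dim, trunk width, tasks, clients

def quantize(u, bits=8):
    """Uniform symmetric quantization of an update: a generic stand-in
    for the paper's compression strategy."""
    scale = np.max(np.abs(u)) + 1e-12
    levels = 2 ** (bits - 1) - 1
    return np.round(u / scale * levels).astype(np.int8), scale

def dequantize(q, scale, bits=8):
    return q.astype(np.float64) * scale / (2 ** (bits - 1) - 1)

class Client:
    def __init__(self):
        # Personalized per-task heads stay local; only the trunk is federated.
        self.heads = [rng.normal(0, 0.1, H) for _ in range(TASKS)]
        # Synthetic per-task linear-regression data.
        self.X = rng.normal(size=(32, D))
        self.Y = [self.X @ rng.normal(size=D) for _ in range(TASKS)]

    def local_update(self, trunk, lr=0.01, steps=5):
        W = trunk.copy()  # shared trunk, shape (D, H)
        for _ in range(steps):
            for t in range(TASKS):
                pred = self.X @ W @ self.heads[t]
                err = pred - self.Y[t]
                # Mean-squared-error gradients for trunk and head.
                gW = self.X.T @ np.outer(err, self.heads[t]) / len(err)
                gh = (self.X @ W).T @ err / len(err)
                W -= lr * gW
                self.heads[t] -= lr * gh
        return W - trunk  # only this delta is uploaded, after quantization

trunk = rng.normal(0, 0.1, (D, H))
clients = [Client() for _ in range(CLIENTS)]
for rnd in range(20):
    deltas = []
    for c in clients:
        q, s = quantize(c.local_update(trunk))  # compress before upload
        deltas.append(dequantize(q, s))         # server-side decompress
    trunk += np.mean(deltas, axis=0)            # FedAvg-style aggregation
```

Two properties of the formulated problem show up directly in the sketch: the upload volume no longer scales with the number of tasks, because the tasks share one trunk, and each upload shrinks further under quantization (here, 8-bit integers plus one scale per update instead of full-precision floats).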