Keywords
Computer science, Generalization, Inference, Personalization, Machine learning, Client, Convergence, Edge computing, Artificial intelligence, Key, Edge device, Task, Server, Cloud computing, Computer network, Computer security, World Wide Web, Mathematical analysis, Mathematics, Management, Economics, Economic growth, Operating system
Authors
Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton, Jaekyun Moon
Identifier
DOI:10.1109/infocom53939.2023.10229027
Abstract
A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently. Existing techniques in federated learning (FL) have encountered a steep tradeoff between these objectives and impose large computational requirements on edge devices during training and inference. In this paper, we propose SplitGP, a new split learning solution that can simultaneously capture generalization and personalization capabilities for efficient inference across resource-constrained clients (e.g., mobile/IoT devices). Our key idea is to split the full ML model into client-side and server-side components, and assign them different roles: the client-side model is trained to have strong personalization capability optimized to each client's main task, while the server-side model is trained to have strong generalization capability for handling all clients' out-of-distribution tasks. We analytically characterize the convergence behavior of SplitGP, revealing that all client models approach stationary points asymptotically. Further, we analyze the inference time in SplitGP and provide bounds for determining model split ratios. Experimental results show that SplitGP outperforms existing baselines by wide margins in inference time and test accuracy for varying amounts of out-of-distribution samples.
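The split-inference idea in the abstract can be illustrated with a minimal sketch: the first layers of the model stay on the client together with a small personalized head for the client's main task, while the remaining layers and a generalized head live on the server and are consulted only when the local prediction is not confident (e.g., for out-of-distribution inputs). All names, layer sizes, and the confidence-threshold rule below are illustrative assumptions, not the paper's actual architecture or offloading criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Client-side component: on-device layers plus a personalized head
# covering only this client's main-task classes (3 of 10, hypothetically).
W_client = rng.normal(size=(16, 8)) * 0.1
W_local_head = rng.normal(size=(8, 3)) * 0.1

# Server-side component: remaining layers plus a generalized head
# over all classes, shared by every client.
W_server = rng.normal(size=(8, 8)) * 0.1
W_global_head = rng.normal(size=(8, 10)) * 0.1

def client_forward(x):
    """Run the on-device layers and the personalized head."""
    h = relu(x @ W_client)            # intermediate features (sent up only if needed)
    p_local = softmax(h @ W_local_head)
    return h, p_local

def server_forward(h):
    """Complete inference on the server from the client's features."""
    z = relu(h @ W_server)
    return softmax(z @ W_global_head)

def infer(x, conf_threshold=0.9):
    """Answer locally when confident; otherwise offload to the server."""
    h, p_local = client_forward(x)
    if p_local.max() >= conf_threshold:
        return "client", p_local      # personalization path: main-task input
    return "server", server_forward(h)  # generalization path: likely OOD input
```

In this framing, the "model split ratio" the paper bounds corresponds to how many layers are kept client-side (here, one of three), trading on-device compute against communication and server load.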