Computer science
Federated learning
Distillation
Machine learning
Artificial intelligence
Ensemble learning
Classifier (UML)
Training set
Ensemble forecasting
Constraint (computer-aided design)
Data mining
Engineering
Mechanical engineering
Organic chemistry
Chemistry
Authors
Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi
Source
Venue: arXiv (Cornell University)
Date: 2020-06-12
Citations: 38
Abstract
Federated Learning (FL) is a machine learning setting in which many devices collaboratively train a shared model while keeping the training data decentralized. In most current training schemes, the central model is refined by averaging the parameters of the server model and the updated parameters from the client side. However, directly averaging model parameters is only possible if all models have the same structure and size, which can be a restrictive constraint in many scenarios.
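For reference, the parameter-averaging aggregation mentioned above (FedAvg-style fusion) can be sketched as follows. This is a minimal illustrative snippet, not the paper's implementation; it assumes every client returns a PyTorch state dict with an identical structure, and the name `average_parameters` is ours.

```python
from typing import Dict, List

import torch


def average_parameters(client_state_dicts: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Uniform element-wise average of client parameters (FedAvg-style fusion)."""
    averaged = {}
    for name in client_state_dicts[0]:
        # Stack the same tensor from every client and average along the client axis.
        averaged[name] = torch.stack([sd[name].float() for sd in client_state_dicts]).mean(dim=0)
    return averaged


# Hypothetical usage: server_model.load_state_dict(average_parameters(collected_state_dicts))
```

This kind of averaging is exactly what requires all client models to share one architecture, which motivates the more flexible fusion scheme described next.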
In this work, we investigate more powerful and more flexible aggregation schemes for FL. Specifically, we propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the client models. This knowledge distillation technique mitigates privacy risk and cost to the same extent as the baseline FL algorithms, but allows flexible aggregation over heterogeneous client models that can differ, e.g., in size, numerical precision, or structure. In extensive empirical experiments on various CV/NLP datasets (CIFAR-10/100, ImageNet, AG News, SST2) and settings (heterogeneous models/data), we show that the server model can be trained much faster, requiring fewer communication rounds than any existing FL technique.
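The fusion step described in the abstract can be approximated as distilling the averaged client predictions into the server model on an unlabeled dataset. The sketch below illustrates that idea in PyTorch under stated assumptions; it is not the authors' released code, and `server_model`, `client_models`, and `unlabeled_loader` are placeholders.

```python
import torch
import torch.nn.functional as F


def distill_from_ensemble(server_model, client_models, unlabeled_loader,
                          steps=100, lr=1e-3, temperature=1.0, device="cpu"):
    """Fit the server model to the averaged predictions of the client models
    on unlabeled data (one illustrative fusion round)."""
    server_model.to(device).train()
    for m in client_models:
        m.to(device).eval()
    optimizer = torch.optim.Adam(server_model.parameters(), lr=lr)

    data_iter = iter(unlabeled_loader)
    for _ in range(steps):
        try:
            batch = next(data_iter)
        except StopIteration:
            data_iter = iter(unlabeled_loader)
            batch = next(data_iter)
        # The loader is assumed to yield either raw inputs or (inputs, ...) tuples.
        x = (batch[0] if isinstance(batch, (list, tuple)) else batch).to(device)

        with torch.no_grad():
            # Ensemble "teacher": average the client logits on the unlabeled batch.
            teacher_logits = torch.stack([m(x) for m in client_models]).mean(dim=0)
            teacher_probs = F.softmax(teacher_logits / temperature, dim=1)

        # "Student": the server model is trained to match the ensemble via KL divergence.
        student_log_probs = F.log_softmax(server_model(x) / temperature, dim=1)
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return server_model
```

Because the server only needs the clients' outputs on unlabeled inputs, the client models may differ in size, precision, or architecture, which is the flexibility the abstract highlights.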