Computer science
Task (project management)
Federated learning
Distributed computing
Machine learning
Artificial intelligence
Fault tolerance
Task analysis
Distributed learning
Psychology
Pedagogy
Economics
Management
Authors
Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar
Source
Venue: Neural Information Processing Systems
Date: 2017-12-04
Volume/Pages: 30: 4427-4437
Citations: 312
Abstract
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets.
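As background, the multi-task approach described in the abstract learns one model per device while coupling the models through a task-relationship regularizer. A generic form of such an objective can be sketched as follows (the notation here is illustrative and may differ from the paper's exact formulation):

```latex
\min_{\mathbf{W},\, \boldsymbol{\Omega}} \;
\sum_{t=1}^{m} \sum_{i=1}^{n_t}
\ell_t\!\left(\mathbf{w}_t^{\top} \mathbf{x}_t^{i},\; y_t^{i}\right)
\;+\; \mathcal{R}\!\left(\mathbf{W}, \boldsymbol{\Omega}\right)
```

Here \(m\) is the number of devices (tasks), \(\mathbf{w}_t\) is the model for device \(t\) (the columns of \(\mathbf{W}\)), \(\ell_t\) is that device's loss over its \(n_t\) local examples, and \(\mathcal{R}\) penalizes \(\mathbf{W}\) through a matrix \(\boldsymbol{\Omega}\) that models relationships among the tasks, which is what lets statistically heterogeneous devices share strength without forcing a single global model.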