Computer science
Correctness
Artificial intelligence
Speedup
Flexibility (engineering)
x86
Deep neural network
Machine learning
Artificial neural network
Parallel computing
Programming language
Software
Mathematics
Statistics
Authors
Gianluca Mittone, Walter Riviera, Iacopo Colonnelli, Robert Birke, Marco Aldinucci
Identifiers
DOI: 10.1007/978-3-031-39698-4_26
Abstract
Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs); this allowed its development as DNNs proliferated but neglected those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only support DNNs reinforces this problem. To address the lack of non-DNN-based FL solutions, we propose MAFL (Model-Agnostic Federated Learning). MAFL merges a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel® OpenFL. MAFL is the first FL system not tied to any machine learning model, allowing exploration of FL beyond DNNs. We test MAFL from multiple points of view, assessing its correctness, flexibility, and scaling properties up to 64 nodes of an HPC cluster. We also show how we optimised OpenFL, achieving a 5.5× speedup over a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power, and RISC-V.
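The abstract describes merging a model-agnostic boosting algorithm, AdaBoost.F, into an FL framework. As a rough illustration of what one round of model-agnostic federated boosting can look like, here is a minimal sketch; the function names, the decision-stump weak learner, the size-weighted error aggregation, and the SAMME-style learner weight are all illustrative assumptions, not MAFL's or OpenFL's actual API or protocol.

```python
# Minimal sketch of one federated boosting round in the spirit of AdaBoost.F.
# Hypothetical structure: `clients` is a list of dicts holding each client's
# local data X, labels y, and per-example weights w (never shared with the server).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_train(X, y, sample_w):
    """Client side: fit any weak learner on locally weighted data."""
    clf = DecisionTreeClassifier(max_depth=1)  # model-agnostic: any estimator works
    clf.fit(X, y, sample_weight=sample_w)
    return clf

def local_error(clf, X, y, sample_w):
    """Client side: weighted error of a candidate learner on local data."""
    miss = clf.predict(X) != y
    return np.average(miss, weights=sample_w)

def federated_round(clients, n_classes):
    """Server side: collect candidate weak learners, pick the globally best
    one by aggregated error, and let clients re-weight their own examples."""
    learners = [local_train(c["X"], c["y"], c["w"]) for c in clients]
    sizes = np.array([len(c["y"]) for c in clients], dtype=float)
    # Aggregate each candidate's error across clients, weighted by dataset size
    # (a simplifying assumption for this sketch).
    errs = np.array([
        np.average([local_error(h, c["X"], c["y"], c["w"]) for c in clients],
                   weights=sizes)
        for h in learners
    ])
    best = int(np.argmin(errs))
    eps = np.clip(errs[best], 1e-10, 1 - 1e-10)
    alpha = np.log((1 - eps) / eps) + np.log(n_classes - 1)  # SAMME-style weight
    for c in clients:  # each client updates its example weights locally
        miss = learners[best].predict(c["X"]) != c["y"]
        c["w"] = c["w"] * np.exp(alpha * miss)
        c["w"] /= c["w"].sum()
    return learners[best], alpha

# Toy usage: two clients with random binary-labelled data.
rng = np.random.default_rng(0)
clients = [{"X": rng.normal(size=(50, 4)),
            "y": rng.integers(0, 2, size=50),
            "w": np.full(50, 1 / 50)}
           for _ in range(2)]
h, a = federated_round(clients, n_classes=2)
```

Note how only models and scalar error rates cross the network, never raw data or gradients; this is what decouples the scheme from DNNs and gradient descent.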