Computer science
Gradient descent
Enhanced Data Rates for GSM Evolution (EDGE)
Focus (optics)
Machine learning
Distributed computing
Online machine learning
Artificial intelligence
Edge computing
Edge device
Internet
Data mining
Active learning (machine learning)
Artificial neural network
Cloud computing
World Wide Web
Physics
Optics
Operating system
Authors
Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, Kevin Chan
Identifier
DOI: 10.1109/jsac.2019.2904348
Abstract
Emerging technologies and applications, including the Internet of Things (IoT), social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, and based on this bound we propose a control algorithm that determines the best trade-off between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
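As a rough illustration of the local-update/global-aggregation trade-off the abstract describes, the following Python sketch alternates tau local gradient-descent steps on each edge node with one data-size-weighted global aggregation (a FedAvg-style loop). All names (local_update, distributed_gd) and the toy linear-regression objective are hypothetical and chosen only for illustration; in particular, the paper's control algorithm adaptively chooses tau from its convergence bound, which this fixed-tau sketch does not implement.

```python
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One gradient-descent step on a node's local data
    (squared-loss linear model, used purely for illustration)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def distributed_gd(datasets, tau=5, rounds=20, dim=3):
    """Alternate tau local updates with one global aggregation.

    datasets: list of (X_i, y_i) pairs, one per edge node.
    tau: number of local updates between aggregations -- the
         trade-off parameter the paper's control algorithm adapts
         (held fixed here for simplicity).
    """
    w_global = np.zeros(dim)
    sizes = np.array([len(y) for _, y in datasets], dtype=float)
    for _ in range(rounds):
        local_models = []
        for X, y in datasets:
            w = w_global.copy()
            for _ in range(tau):  # local update phase on each node
                w = local_update(w, X, y)
            local_models.append(w)
        # global aggregation: weighted average of local models,
        # weighted by each node's local data size
        w_global = np.average(local_models, axis=0, weights=sizes)
    return w_global

# Toy usage: three edge nodes with differently distributed data
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
datasets = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(50, 3))
    datasets.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
print(distributed_gd(datasets, tau=5))
```

In the setting the abstract outlines, a larger tau means fewer (costly) global aggregations per gradient step but slower convergence on non-identically distributed data; the paper's contribution is choosing this trade-off dynamically under the given resource budget.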