Keywords: Computer Science; Dice; Enhanced Data Rates for GSM Evolution (EDGE); Workload; Edge Device; Selection (genetic algorithm); Distributed Computing; Computing; Machine Learning; Artificial Intelligence; Cloud Computing; Algorithm; Geometry; Mathematics; Operating System
Authors
Rituparna Saha, Sudip Misra, Aishwariya Chakraborty, Chandranath Chatterjee, Pallav Kumar Deb
Source
Journal: IEEE Transactions on Parallel and Distributed Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-02-01
Volume/Issue: 34 (2): 675-686
Citations: 11
Identifier
DOI: 10.1109/tpds.2022.3217271
Abstract
This work presents an efficient data-centric client selection approach, named DICE, to enable federated learning (FL) over distributed edge networks. Prior research focused on assessing the computation and communication ability of the client devices for selection in FL. On-device data quality, in terms of data volume and heterogeneity, across these distributed devices is largely overlooked. The obvious outcome is the selection of an improper subset of clients with poor-quality data, which inevitably results in an inefficient trained model. To address this problem, in this work we design DICE, which prioritizes the data quality of the client devices in the selection phase, in addition to their computation and communication abilities, to improve the accuracy of FL. Additionally, in DICE, we introduce the assistance of vicinal edge devices to account for the lack of computation or communication abilities in certain devices without violating the privacy-preserving guarantees of FL. Towards this aim, we propose a scheme to decide the optimal edge device, in terms of latency and workload, to be selected as the helper device. The experimental results show that DICE improves convergence speed for a given level of model accuracy. Further, the simulation results show that DICE reduces delay by at least 16%, energy consumption by at least 17%, and packet loss by at least 55% compared to the existing benchmarks while prioritizing the on-device data quality across clients.
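The abstract describes two selection steps: ranking FL clients by a mix of on-device data quality (volume and heterogeneity) and system ability (computation and communication), and picking a helper edge device by latency and workload. The sketch below illustrates that general idea only; the class fields, the `dice_style_score` weighting, and `pick_helper` are hypothetical simplifications, not the scoring formulas used by DICE itself.

```python
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    data_volume: int      # number of local training samples
    label_entropy: float  # normalized heterogeneity proxy in [0, 1]
    compute: float        # relative computation ability in [0, 1]
    bandwidth: float      # relative communication ability in [0, 1]

@dataclass
class EdgeDevice:
    name: str
    latency_ms: float  # round-trip latency to this candidate helper
    workload: float    # current utilization in [0, 1]

def dice_style_score(c: Client, w_data: float = 0.5, w_sys: float = 0.5) -> float:
    # Hypothetical composite score: data quality (capped volume x heterogeneity)
    # blended with system ability (compute x bandwidth).
    data_quality = min(c.data_volume / 1000, 1.0) * c.label_entropy
    system_ability = c.compute * c.bandwidth
    return w_data * data_quality + w_sys * system_ability

def select_clients(clients: list[Client], k: int) -> list[Client]:
    # Pick the k highest-scoring clients for the next FL round.
    return sorted(clients, key=dice_style_score, reverse=True)[:k]

def pick_helper(edges: list[EdgeDevice], w_lat: float = 0.5, w_load: float = 0.5) -> EdgeDevice:
    # Minimize a latency/workload mix; latency is normalized by the slowest candidate.
    max_lat = max(e.latency_ms for e in edges)
    return min(edges, key=lambda e: w_lat * e.latency_ms / max_lat + w_load * e.workload)
```

For example, a client with many diverse samples but modest bandwidth can still outrank a fast client holding a tiny, skewed dataset, which is the behavior the paper argues system-only selection misses.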