Computer science
Server
Artificial neural network
Energy consumption
Efficient energy use
Robustness (evolution)
Latency (audio)
Edge device
Single point of failure
Distributed computing
Mobile device
Spiking neural network
Train
Artificial intelligence
Cloud computing
Computer network
Telecommunications
Operating system
Geography
Chemistry
Engineering
Electrical engineering
Gene
Biology
Cartography
Biochemistry
Ecology
Authors
Ons Aouedi,Kandaraj Piamrat,Mario Südholt
Identifier
DOI: 10.1145/3616390.3618288
Abstract
Federated Learning (FL) has emerged in edge computing to address privacy concerns in mobile networks. It allows mobile devices to collaboratively train a model while keeping the training data where it was generated. In practice, however, it suffers from several issues: (i) robustness, due to a single point of failure; (ii) latency, as it requires a significant amount of communication resources; and (iii) convergence, due to system and statistical heterogeneity. To cope with these issues, Hierarchical FL (HFL) has been proposed as a promising alternative. HFL adds edge servers as an intermediate layer for sub-model aggregation; several edge-level iterations are performed before the global aggregation at the cloud server takes place, making the overall process more efficient, especially with non-independent and identically distributed (non-IID) data. Moreover, using traditional Artificial Neural Networks (ANNs) with HFL consumes a significant amount of energy, further hindering the application of decentralized FL on energy-constrained mobile devices. Therefore, this paper presents HFedSNN: an energy-efficient, fast-converging model that incorporates Spiking Neural Networks (SNNs) within HFL. SNNs are a generation of neural networks that promise substantial improvements in energy and computation efficiency. Taking advantage of both HFL and SNNs, numerical results demonstrate that HFedSNN outperforms FL with SNN (FedSNN) in accuracy by 4.48% and reduces communication overhead by 26×. Furthermore, HFedSNN significantly reduces energy consumption, by 4.3× compared with FL with ANN (FedANN).
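To illustrate the two-level aggregation the abstract describes (edge servers averaging their clients' sub-models for several rounds before a single cloud-level aggregation), the following is a minimal, hypothetical sketch of hierarchical FedAvg on a toy linear model in NumPy. It is not the authors' implementation: the names (local_update, fedavg), the synthetic data, and all hyperparameters are assumptions, and SNN-specific training is omitted.

```python
# Hypothetical sketch of edge + cloud aggregation (hierarchical FedAvg).
# Toy linear least-squares model; NOT the HFedSNN code from the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10                      # toy model: a single weight vector
N_EDGES = 3                   # edge servers (intermediate aggregation layer)
CLIENTS_PER_EDGE = 4          # mobile devices attached to each edge server
EDGE_ROUNDS = 5               # sub-model aggregations before one cloud aggregation
CLOUD_ROUNDS = 2              # global aggregations at the cloud server

def local_update(weights, data, lr=0.1, steps=5):
    """One client's local SGD steps on its own least-squares data (X, y)."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(models, sizes):
    """Sample-size-weighted average of model vectors (FedAvg)."""
    return np.average(models, axis=0, weights=np.asarray(sizes, dtype=float))

# Synthetic per-client datasets (stand-in for data kept on mobile devices).
clients = [[(rng.normal(size=(32, DIM)), rng.normal(size=32))
            for _ in range(CLIENTS_PER_EDGE)] for _ in range(N_EDGES)]

global_w = np.zeros(DIM)
for _ in range(CLOUD_ROUNDS):
    edge_models, edge_sizes = [], []
    for edge in clients:
        edge_w = global_w.copy()
        # Several edge-level iterations before anything reaches the cloud,
        # which is what reduces device-to-cloud communication.
        for _ in range(EDGE_ROUNDS):
            locals_ = [local_update(edge_w, d) for d in edge]
            edge_w = fedavg(locals_, [len(d[1]) for d in edge])
        edge_models.append(edge_w)
        edge_sizes.append(sum(len(d[1]) for d in edge))
    global_w = fedavg(edge_models, edge_sizes)   # global aggregation at the cloud

print("global model after hierarchical aggregation:", global_w[:3], "...")
```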