Computer science
Spiking neural network
Artificial neural network
Machine learning
Artificial intelligence
Leverage (statistics)
Efficient energy use
Electrical engineering
Engineering
Authors
Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda
Identifier
DOI: 10.1109/tsp.2021.3121632
Abstract
As neural networks see widespread adoption in resource-constrained embedded devices, there is a growing need for low-power neural systems. Spiking Neural Networks (SNNs) are emerging as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs), which are known to be computationally intensive. From an application perspective, since federated learning involves many energy-constrained devices, there is significant scope to leverage the energy efficiency that SNNs provide. Despite its importance, little attention has been paid to training SNNs in a large-scale distributed setting such as federated learning. In this paper, we bring SNNs to a more realistic federated learning scenario. Specifically, we design a federated learning method for training decentralized and privacy-preserving SNNs. To validate the proposed method, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with the CIFAR10 and CIFAR100 benchmarks. We observe that SNNs outperform ANNs in overall accuracy by over 15% when the data is distributed across a large number of clients in the federation, while providing up to $4.3\times$ energy efficiency. Beyond efficiency, we also analyze the sensitivity of the proposed federated SNN framework to the data distribution among clients, stragglers, and gradient noise, and perform a comprehensive comparison with ANNs. The source code is available at https://github.com/Intelligent-Computing-Lab-Yale/FedSNN.
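To make the setup concrete, below is a minimal PyTorch sketch of the kind of federated training loop the abstract describes: each client trains a small spiking model locally from the current global weights, and a server averages the returned parameters (FedAvg-style aggregation). All names here (SpikeFn, SimpleSNN, local_update, fed_avg), the boxcar surrogate gradient, and the hyperparameters are illustrative assumptions rather than the authors' implementation; the actual code is in the repository linked above.

```python
import copy
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a boxcar surrogate gradient (illustrative choice)."""

    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem > 1.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (mem,) = ctx.saved_tensors
        # Pass gradients only for membrane potentials near the firing threshold.
        return grad_out * ((mem - 1.0).abs() < 0.5).float()


class SimpleSNN(nn.Module):
    """Toy rate-coded SNN: one hidden spiking layer unrolled over T timesteps."""

    def __init__(self, in_dim=3 * 32 * 32, hidden=128, classes=10, T=8):
        super().__init__()
        self.T = T
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        x = x.flatten(1)
        mem = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        logits = torch.zeros(x.size(0), self.fc2.out_features, device=x.device)
        for _ in range(self.T):
            mem = mem + self.fc1(x)        # integrate input current
            spk = SpikeFn.apply(mem)       # fire where mem crosses threshold
            mem = mem - spk                # soft reset for fired neurons
            logits = logits + self.fc2(spk)
        return logits / self.T             # average readout over timesteps


def local_update(global_model, loader, epochs=1, lr=0.1):
    """One client's local training pass, starting from the global weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()


def fed_avg(client_states):
    """Uniform parameter averaging across the participating clients."""
    with torch.no_grad():
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            for state in client_states[1:]:
                avg[key] = avg[key] + state[key]
            avg[key] = avg[key] / len(client_states)
    return avg


if __name__ == "__main__":
    # Synthetic CIFAR10-shaped batches as a stand-in for per-client datasets.
    batches = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
               for _ in range(4)]
    client_loaders = [batches, batches, batches]
    global_model = SimpleSNN()
    for round_idx in range(3):  # a few communication rounds
        states = [local_update(global_model, ld) for ld in client_loaders]
        global_model.load_state_dict(fed_avg(states))
        print(f"round {round_idx}: aggregated {len(states)} client updates")
```

The surrogate gradient is what makes the thresholded spike trainable by backpropagation; without it, the hard threshold would block all gradients to the first layer. The paper's framework additionally addresses client sampling, stragglers, and gradient noise, which this sketch omits for brevity.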