Spike (software development)
Computer science
Spiking neural network
Edge device
Asynchronous communication
Computation
Convolutional neural network
Inference
Chip
Edge
Artificial neural network
Energy consumption
Energy (signal processing)
Energy efficiency
Artificial intelligence
Reduction (mathematics)
Parallel computing
Computer engineering
Algorithm
Engineering
Mathematics
Telecommunications
Software engineering
Electrical engineering
Operating system
Statistics
Cloud computing
Geometry
Authors
Jilin Zhang, Dexuan Huo, Jian Zhang, Chunqi Qian, Qi Liu, Liyang Pan, Zhihua Wang, Ning Qiao, Kea‐Tiong Tang, Hong Chen
Identifier
DOI:10.1109/isscc42615.2023.10067650
Abstract
With the development of on-chip learning processors for edge-AI applications, the energy efficiency of NN inference and training is increasingly critical. Since on-chip training dominates the energy consumption of edge-AI processors [1], [2], [4], [5], reducing it is of paramount importance. Spiking neural networks (SNNs) offer more energy-efficient inference and learning than convolutional neural networks (CNNs) or deep neural networks (DNNs), but SNN-based processors face three challenges that need to be addressed (Fig. 22.6.1). 1) During on-chip training, some factors involved in the ΔW computation are zero, resulting in ΔW=0 and thus redundant ΔW computation and memory accesses for weight updates. 2) After a certain accuracy is reached, additional data cannot improve accuracy significantly, and 95% of the energy is wasted on the unnecessary processing of the input spike events thereafter. 3) With sparse input-spike events, the number of spike events per time step varies; if spike processing is synchronized by time step, the worst case must be provisioned for, wasting both energy and time.
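The first challenge above can be illustrated with a small sketch: in a spike-driven learning rule, each ΔW entry is a product of a presynaptic spike indicator and a postsynaptic error term, so whenever either factor is zero the update is zero and the computation plus the weight-memory access can be skipped. This is a hypothetical illustration in plain NumPy, not the paper's actual hardware mechanism; the learning rule and names (`sparse_delta_w`, `lr`) are assumptions for the example.

```python
import numpy as np

def sparse_delta_w(pre_spikes, post_error, lr=0.1):
    """Sketch of zero-skipping weight updates (hypothetical, not the
    paper's circuit): compute ΔW[i, j] = lr * pre_spikes[i] * post_error[j]
    only where both factors are nonzero, avoiding redundant computation
    and memory accesses for updates that would be zero anyway."""
    delta_w = {}
    # Iterate only over presynaptic neurons that actually spiked
    # and postsynaptic neurons with a nonzero error term.
    for i in np.flatnonzero(pre_spikes):
        for j in np.flatnonzero(post_error):
            delta_w[(i, j)] = lr * pre_spikes[i] * post_error[j]
    return delta_w

pre = np.array([0, 1, 0, 0, 1])   # sparse spike events (5 neurons)
err = np.array([0.0, 0.5, 0.0])   # sparse error terms (3 neurons)
updates = sparse_delta_w(pre, err)
# A dense update would touch 5*3 = 15 synapses; here only 2*1 = 2 are touched.
```

The same zero-skipping idea applies per time step, which is why the abstract's third point matters: with event-driven (asynchronous) rather than time-step-synchronized processing, the work scales with the actual number of spike events instead of the worst case.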