MNIST database
Computer science
Inference
Artificial neural network
Artificial intelligence
Non-volatile memory
Electronic engineering
Computer engineering
Computer architecture
Embedded system
Computer hardware
Engineering
Authors
Xiaoyu Sun, Panni Wang, Kai Ni, Suman Datta, Shimeng Yu
Identifiers
DOI: 10.1109/IEDM.2018.8614611
Abstract
In-memory computing with analog non-volatile memories (NVMs) can accelerate both the in-situ training and inference of deep neural networks (DNNs) by parallelizing multiply-accumulate (MAC) operations in the analog domain. However, in-situ training accuracy suffers unacceptable degradation due to undesired weight-update asymmetry/nonlinearity and limited bit precision. In this work, we overcome this challenge by introducing a compact ferroelectric FET (FeFET) based synaptic cell that exploits hybrid precision for in-situ training and inference. We propose a novel hybrid approach that uses the modulated "volatile" gate voltage of the FeFET to represent the least significant bits (LSBs) for symmetric/linear updates during training only, and uses the "non-volatile" polarization states of the FeFET to hold the most significant bits (MSBs) for inference. This design is demonstrated with an experimentally validated FeFET SPICE model and co-simulation with the TensorFlow framework. The results show that with the proposed 6-bit and 7-bit synapse designs, in-situ training accuracy reaches ~97.3% on the MNIST dataset and ~87% on the CIFAR-10 dataset, respectively, approaching ideal software-based training.
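To make the hybrid-precision scheme concrete, below is a minimal software sketch of how such a synapse could be emulated: gradient steps are applied only to fine-grained "volatile" LSB levels (a symmetric/linear update), and accumulated LSB overflow is carried into the coarse "non-volatile" MSB levels that are read out for inference. The 3+3 bit split, weight range, learning rate, and carry rule here are illustrative assumptions for this sketch, not the authors' circuit-level behavior.

```python
import numpy as np

# Illustrative constants: a 6-bit synapse split into 3 MSBs + 3 LSBs
# (the bit split and weight range are assumptions for this sketch).
MSB_BITS, LSB_BITS = 3, 3
MSB_LEVELS = 2 ** MSB_BITS          # coarse "non-volatile" polarization levels
LSB_LEVELS = 2 ** LSB_BITS          # fine "volatile" gate-voltage levels
W_MAX = 1.0                         # assumed weight range [-W_MAX, +W_MAX]
MSB_STEP = 2 * W_MAX / MSB_LEVELS   # weight quantum of one MSB level
LSB_STEP = MSB_STEP / LSB_LEVELS    # finer quantum carried by the LSBs


def effective_weight(msb, lsb):
    """Weight seen by the analog MAC: coarse NVM part plus fine volatile part."""
    return msb * MSB_STEP + lsb * LSB_STEP


def train_update(msb, lsb, grad, lr=0.1):
    """Apply a gradient step to the LSBs only (symmetric/linear update),
    then carry any LSB overflow into the non-volatile MSBs."""
    lsb = lsb - np.round(lr * grad / LSB_STEP)   # quantized linear LSB update
    carry = np.trunc(lsb / LSB_LEVELS)           # overflow measured in MSB quanta
    lsb = lsb - carry * LSB_LEVELS               # keep LSBs within one MSB quantum
    msb = np.clip(msb + carry, -MSB_LEVELS // 2, MSB_LEVELS // 2 - 1)
    return msb, lsb


# Example: train a single synapse, then read the weight the MAC array would see.
msb, lsb = np.zeros(1), np.zeros(1)
for grad in [0.5, 0.5, -0.2, 0.8, 0.8]:
    msb, lsb = train_update(msb, lsb, grad)
print(effective_weight(msb, lsb))                # full hybrid weight during training
print(msb * MSB_STEP)                            # MSB-only weight kept for inference
```

Since the LSBs are volatile and do not persist after training, it is the carried-over MSB state that survives for inference; in this sketch that corresponds to reading `msb * MSB_STEP` alone, mirroring the paper's use of polarization states as the retained weight.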