Static random-access memory (SRAM)
Computer science
Ternary operation
Efficient energy use
Process (computing)
Binary number
Energy consumption
Computer hardware
Arithmetic
Operating system
Mathematics
Electrical engineering
Engineering
Programming language
Authors
Nameun Kang, Hyungjun Kim, Hyunmyung Oh, Jae-Joon Kim
Identifiers
DOI:10.1145/3489517.3530574
Abstract
Recently, various in-memory computing accelerators for low-precision neural networks have been proposed. While in-memory Binary Neural Network (BNN) accelerators achieve significant energy efficiency, BNNs show severe accuracy degradation compared to their full-precision counterpart models. To mitigate this problem, we propose TAIM, an in-memory computing hardware design that supports ternary activation with negligible hardware overhead. In TAIM, a 6T SRAM cell computes the multiplication between a ternary activation and a binary weight. Since the 6T SRAM cell consumes no energy when the input activation is 0, the proposed TAIM hardware can achieve even higher energy efficiency than the BNN case by exploiting input 0s. We fabricated the proposed TAIM hardware in a 28nm CMOS process and evaluated its energy efficiency on various image classification benchmarks. The experimental results show that the proposed TAIM hardware achieves ~3.61× higher energy efficiency on average compared to previous designs that support ternary activation.
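The energy-saving idea in the abstract can be illustrated with a small behavioral sketch. This is a hypothetical software model (not the authors' hardware or code): it computes a dot product between a ternary activation vector in {-1, 0, +1} and a binary weight vector in {-1, +1}, and counts how many cells are "active", mirroring the claim that a 6T SRAM cell consumes no dynamic energy when its input activation is 0. The function name `taim_dot` is an assumption for illustration.

```python
def taim_dot(activations, weights):
    """Behavioral model of a ternary-activation x binary-weight dot product.

    Returns (accumulated sum, number of active cells). Cells with a zero
    activation are modeled as idle, so they contribute no energy (and no
    term to the sum).
    """
    assert all(a in (-1, 0, 1) for a in activations), "ternary activations"
    assert all(w in (-1, 1) for w in weights), "binary weights"
    acc = 0
    active = 0  # cells that actually switch (nonzero activation)
    for a, w in zip(activations, weights):
        if a == 0:
            continue  # zero input: modeled 6T cell stays idle
        acc += a * w
        active += 1
    return acc, active

result, active_cells = taim_dot([1, 0, -1, 1, 0], [1, -1, -1, 1, 1])
# result = 1*1 + (-1)*(-1) + 1*1 = 3; only 3 of the 5 cells were active
```

In this toy model, the more zeros the activation vector contains, the fewer cells switch, which is the mechanism by which TAIM is reported to beat the pure-BNN baseline in energy efficiency.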