Authors
Qing Dong,Mahmut E. Sinangil,Burak Erbagci,Dar Sun,Win-San Khwa,Hung-Jen Liao,Yih Wang,Jonathan Chang
Identifier
DOI:10.1109/isscc19947.2020.9062985
Abstract
Compute-in-memory (CIM) parallelizes multiply-and-accumulate (MAC) computations and reduces off-chip weight access, lowering energy consumption and latency, specifically for AI edge devices. Prior CIM approaches demonstrated tradeoffs among area, noise margin, process variation, and weight precision. 6T SRAM [1]–[3] provides the smallest cell area for CIM, but cell stability limits the number of activated cells, resulting in low parallelization. 10T and twin-8T [4]–[5] isolate the read/write paths to improve noise margin; however, both require a special bit-cell design using logic layout rules, resulting in over 2x area overhead compared to foundry yield-optimized 6T SRAM. Furthermore, the single-bit weight precision of prior work [1]–[4] cannot meet the requirements of high-precision operation and scalability for large neural networks.
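For context, the multiply-and-accumulate (MAC) operation that CIM macros parallelize can be sketched in plain Python. This is a hypothetical illustration of the arithmetic only (values and function names are invented, not from the paper): each output is a dot product of input activations with one column of weights stored in the SRAM array.

```python
# Sketch of the MAC operation a CIM macro parallelizes: each output
# is a dot product of input activations with a column of stored
# weights. Values below are illustrative, not from the paper.

def mac(activations, weights):
    """Accumulate element-wise products (one column's output)."""
    assert len(activations) == len(weights)
    acc = 0
    for a, w in zip(activations, weights):
        acc += a * w
    return acc

activations = [1, 0, 1, 1]           # binary input activations
weight_cols = [[1, -1, 1, 1],        # four columns of stored weights,
               [-1, 1, 1, -1],       # e.g. single-bit (+1/-1) weights
               [1, 1, -1, 1],        # as in prior work [1]-[4]
               [-1, -1, 1, 1]]

# A CIM array evaluates all columns concurrently on their bitlines;
# a list comprehension stands in for that parallelism here.
outputs = [mac(activations, col) for col in weight_cols]
print(outputs)  # → [3, -1, 1, 1]
```

A conventional processor would fetch each weight from off-chip memory before multiplying; CIM performs the accumulation inside the memory array, which is the energy and latency saving the abstract refers to.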