Static random-access memory
Computer science
CMOS chip
Computer hardware
Artificial neural network
Parallel computing
Bitwise operation
Digital signal processing
Multiplication
Macro
Arithmetic
Electronic engineering
Artificial intelligence
Engineering
Mathematics
Programming language
Combinatorics
Authors
Bonan Yan,Jeng-Long Hsu,Pang-Cheng Yu,Chia‐Chi Lee,Yaojun Zhang,Wenshuo Yue,Guoqiang Mei,Yuchao Yang,Yue Yang,Hai Li,Yiran Chen,Ru Huang
Identifier
DOI:10.1109/isscc42614.2022.9731545
Abstract
Advanced intelligent embedded systems perform cognitive tasks with highly efficient vector-processing units for deep neural network (DNN) inference and other vector-based signal processing under tight power budgets. SRAM-based compute-in-memory (CIM) achieves high energy efficiency for vector-matrix multiplications, offers <1ns read/write speed, and eliminates large numbers of repeated memory accesses. However, prior SRAM CIM macros require a large area for compute circuits (either using ADCs for analog CIM [1-4] or CMOS static logic for all-digital CIM [5-6]), support only a limited set of CIM functions, and use fixed vector-processing dimensions that cause a low spatial-utilization rate when deploying DNNs (Fig. 11.7.1).
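The core operation such a CIM macro accelerates is a vector-matrix multiplication, where the weight matrix resides in the SRAM array and the input vector drives the word/bit lines. A minimal sketch of the mathematics (not the circuit) in plain Python, with illustrative values chosen here for the example:

```python
def vector_matrix_multiply(x, W):
    """Compute y = x @ W for a 1-D input vector x and 2-D weight matrix W.

    In an SRAM CIM macro, W would be held in the memory array and each
    output element y[j] accumulated along a column; here we only model
    the arithmetic the macro performs.
    """
    rows, cols = len(W), len(W[0])
    assert len(x) == rows, "input length must match matrix row count"
    return [sum(x[i] * W[i][j] for i in range(rows)) for j in range(cols)]

# Hypothetical example: a 3-element activation vector against a 3x2
# weight matrix (values are illustrative, not from the paper).
x = [1, 2, 3]
W = [[1, 0],
     [0, 1],
     [1, 1]]
print(vector_matrix_multiply(x, W))  # -> [4, 5]
```

In hardware this sum is produced in parallel across all columns in one access, which is what yields the energy-efficiency advantage over fetching each weight through a conventional memory interface.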