Neuromorphic engineering
Convolution (computer science)
Materials science
Pixel
Scale (ratio)
Computer hardware
Computer architecture
Computational science
Computer science
Artificial intelligence
Artificial neural network
Physics
Quantum mechanics
Authors
Xianghong Zhang,Di Liu,Jianxin Wu,Enping Cheng,Congyao Qin,Changsong Gao,Liuting Shan,Yi Zou,Yuanyuan Hu,Tailiang Guo,Huipeng Chen
Identifiers
DOI:10.1002/adfm.202420045
Abstract
For convolutional neural networks, increasing the performance of hardware computing systems is crucial in the era of big data. Benefiting from neuromorphic devices, performing the convolutional calculation in a crossbar array circuit has become a promising approach to high-performance hardware computing systems. However, as computation scales, this hardware faces the challenges of low resource-utilization efficiency and low power efficiency. Here, a novel pixel-level strategy, which leverages the dynamic change of electron concentration to carry out the convolution calculation, is proposed for high-performance hardware computing systems. Compared with the crossbar-array-circuit-based strategy, which requires at least four devices, the proposed approach raises power efficiency to 413% and decreases the number of training epochs to 38%. This work presents a novel physics-based approach that enables highly efficient convolutional calculation, improves power efficiency, speeds up convergence, and paves the way for building complex convolutional neural networks with large-scale convolutional computation capabilities.
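For readers unfamiliar with the operation the abstract refers to, the sketch below shows the convolutional calculation in software terms: each output pixel is one multiply-accumulate (MAC) sum over a kernel window, which is the workload that crossbar array circuits map onto device conductances. This is only an illustrative sketch of a standard 2-D "valid" convolution, not the pixel-level device strategy proposed in the paper; all names here are illustrative.

```python
# Illustrative sketch: a 2-D "valid" convolution written as the
# per-pixel multiply-accumulate (MAC) sums that neuromorphic
# crossbar hardware is designed to accelerate.

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image`; each output pixel is one MAC sum."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # output height ("valid" mode)
    ow = len(image[0]) - kw + 1   # output width
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out

# 4x4 image with a 3x3 averaging kernel -> 2x2 output,
# where each entry is the mean of one 3x3 window.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
k = [[1 / 9] * 3 for _ in range(3)]
print(conv2d_valid(img, k))
```

In a crossbar implementation, the kernel weights become device conductances and the inner MAC loop is carried out in the analog domain by Ohm's and Kirchhoff's laws, which is why device count per convolution matters for power efficiency.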