Marek Żyliński, Amir Nassibi, Ildar Rakhmatulin, Adil Malik, Christos Papavassiliou, Danilo P. Mandic
Source
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs (Institute of Electrical and Electronics Engineers). Published: 2023-11-27. Volume 71, Issue 3, pp. 1738-1743. Cited by: 3
Identifier
DOI: 10.1109/TCSII.2023.3336831
Abstract
Artificial intelligence (AI) on an edge device has enormous potential, including advanced signal filtering, event detection, optimization in communications and data compression, improved device performance, advanced on-chip process control, and enhanced energy efficiency. In this tutorial, we provide a brief overview of AI deployment on edge devices and describe the process of building and deploying a neural network model on a digital edge device. The primary challenge when deploying an AI model in circuits is fitting the model within the constraints of limited resources: the restricted memory capacity and finite computational power of IoT circuits constrain the use of deep neural networks on such devices. We address this issue by elucidating methods for optimizing neural network models. Part of the tutorial also covers the deployment of deep neural networks on logic circuits, as significantly higher computational speed can be attained by transitioning the AI paradigm from neural networks to learning automata algorithms. This shift involves a move from arithmetic-based calculations to logic-based approaches, which facilitates the deployment of AI onto Field-Programmable Gate Arrays (FPGAs). The last part of the tutorial covers the emerging topic of in-memory computation of the multiply-accumulate (MAC) operation. Transferring computations to analog memories has the potential to improve speed and energy efficiency by several orders of magnitude compared to digital architectures. It is our hope that this tutorial will assist researchers and engineers in integrating AI models on edge devices, facilitating rapid and reliable implementation.
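As a concrete illustration of the model-optimization step mentioned in the abstract, the sketch below shows post-training quantization with TensorFlow Lite, one common way to shrink a trained network so it fits the memory budget of a microcontroller-class edge device. The toy model, layer sizes, and output file name are illustrative assumptions, not details drawn from the tutorial itself.

```python
# A minimal sketch of post-training quantization with TensorFlow Lite.
# The small classifier below is a hypothetical stand-in for the model
# to be deployed; only the conversion steps are the point here.
import tensorflow as tf

# Hypothetical toy model (assumed architecture, not from the tutorial).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to a TFLite flat buffer with default optimizations, which
# quantize the 32-bit float weights to 8-bit integers and cut the
# serialized model size roughly fourfold.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting buffer can be written to flash on the target device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that `tf.lite.Optimize.DEFAULT` performs dynamic-range quantization (integer weights, float activations); supplying a representative dataset to the converter would enable full integer quantization, which also removes floating-point arithmetic at inference time and matters on MCUs without an FPU.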