Convolutional Neural Network
Quantization (Signal Processing)
Computer Science
Artificial Intelligence
Algorithm
Authors
Federico G. Zacchigna, Sergio E. Lew, Ariel Lutenberg
Source
Journal: Electronics
[Multidisciplinary Digital Publishing Institute]
Date: 2024-05-14
Volume/Issue: 13 (10): 1923-1923
Cited by: 5
Identifier
DOI:10.3390/electronics13101923
Abstract
This work focuses on the efficient quantization of convolutional neural networks (CNNs). Specifically, we introduce a method called non-uniform uniform quantization (NUUQ), a novel quantization methodology that combines the benefits of non-uniform quantization, such as high compression levels, with the advantages of uniform quantization, which enables an efficient implementation in fixed-point hardware. NUUQ is based on decoupling the quantization levels from the number of bits. This decoupling allows for a trade-off between the spatial and temporal complexity of the implementation, which can be leveraged to further reduce the spatial complexity of the CNN, without a significant performance loss. Additionally, we explore different quantization configurations and address typical use cases. The NUUQ algorithm demonstrates the capability to achieve compression levels equivalent to 2 bits without an accuracy loss and even levels equivalent to ∼1.58 bits, but with a loss in performance of only ∼0.6%.
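The abstract's central idea is decoupling the number of quantization levels from the stored bit width, so that, for example, three levels cost only log2(3) ≈ 1.58 bits per weight when indices are packed tightly. The sketch below is only an illustration of that accounting, assuming a fixed, hand-picked codebook; it is not the authors' NUUQ algorithm, and the function names and level values are placeholders.

```python
# Illustrative sketch (not the paper's NUUQ implementation): quantize a weight
# tensor to L levels, where L need not be a power of two, so the effective
# storage cost is log2(L) bits per weight rather than a whole bit count.
import numpy as np

def quantize_to_levels(weights, levels):
    """Map each weight to the index of the nearest value in `levels`."""
    levels = np.asarray(levels, dtype=np.float32)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return idx.astype(np.uint8), levels

def dequantize(idx, levels):
    """Reconstruct approximate weights from stored level indices."""
    return levels[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(64, 3, 3)).astype(np.float32)

    # Ternary example: 3 levels -> log2(3) ~= 1.58 bits per weight, provided
    # the indices are packed more tightly than one padded 2-bit field each.
    ternary_levels = [-0.05, 0.0, 0.05]  # placeholder level values
    idx, codebook = quantize_to_levels(w, ternary_levels)
    w_hat = dequantize(idx, codebook)

    bits_per_weight = np.log2(len(codebook))
    print(f"levels = {len(codebook)}, effective bits/weight = {bits_per_weight:.2f}")
    print(f"mean abs quantization error = {np.abs(w - w_hat).mean():.4f}")
```

Under these assumptions, the 3-level case corresponds to the ∼1.58-bit compression figure quoted in the abstract, while a 4-level codebook would correspond to the 2-bit case; how NUUQ actually selects the levels and packs the indices is described in the full paper, not here.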