FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs With Dynamic Fixed-Point Representation

Keywords: Quantization (signal processing), Computer science, Inference, Computer engineering, Deep learning, Artificial neural network, Floating point, Deep neural network, Benchmark (surveying), Computation, Artificial intelligence, Algorithm, Machine learning, Geodesy, Geography
Authors
Ahmad Shawahna,Sadiq M. Sait,Aiman H. El‐Maleh,Irfan Ahmad
Source
Journal: IEEE Access [Institute of Electrical and Electronics Engineers]
Volume: 10, Pages: 30202-30231, Cited by: 6
Identifier
DOI: 10.1109/access.2022.3157893
Abstract

Deep neural networks (DNNs) have demonstrated their effectiveness in a wide range of computer vision tasks, with state-of-the-art results obtained through complex and deep structures that require intensive computation and memory. Nowadays, efficient model inference is crucial for consumer applications on resource-constrained platforms. As a result, there is much interest in the research and development of dedicated deep learning (DL) hardware to improve the throughput and energy efficiency of DNNs. Low-precision representation of DNN data-structures through quantization would bring great benefits to specialized DL hardware. However, rigorous quantization leads to a severe accuracy drop. As such, quantization opens a large hyper-parameter space at bit-precision levels, the exploration of which is a major challenge. In this paper, we propose a novel framework referred to as the Fixed-Point Quantizer of deep neural Networks (FxP-QNet) that flexibly designs a mixed low-precision DNN for integer-arithmetic-only deployment. Specifically, FxP-QNet gradually adapts the quantization level for each data-structure of each layer based on the trade-off between the network accuracy and the low-precision requirements. Additionally, it employs post-training self-distillation and network prediction error statistics to optimize the quantization of floating-point values into fixed-point numbers. Examining FxP-QNet on state-of-the-art architectures and the benchmark ImageNet dataset, we empirically demonstrate its effectiveness in achieving the accuracy-compression trade-off without the need for training. The results show that FxP-QNet-quantized AlexNet, VGG-16, and ResNet-18 reduce the overall memory requirements of their full-precision counterparts by 7.16x, 10.36x, and 6.44x with less than 0.95%, 0.95%, and 1.99% accuracy drop, respectively.
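To make the dynamic fixed-point representation concrete, the sketch below quantizes a tensor to a b-bit signed fixed-point format whose fractional length is chosen per tensor from its observed range. This is a minimal range-based illustration, not the paper's full procedure: the function and variable names are ours, and FxP-QNet additionally refines the quantization using post-training self-distillation and prediction-error statistics.

```python
import numpy as np

def quantize_dynamic_fixed_point(x, bits=8):
    """Quantize a float tensor to signed dynamic fixed-point.

    A b-bit value q with fractional length f represents q * 2**-f,
    where q lies in [-2**(b-1), 2**(b-1) - 1]. Here f is chosen per
    tensor so that the largest magnitude in x stays representable.
    """
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x), bits - 1
    # Integer bits needed (sign bit excluded) to cover max_abs.
    int_bits = int(np.floor(np.log2(max_abs))) + 1
    frac_bits = bits - 1 - int_bits  # may be negative for very large ranges
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return (q / scale).astype(x.dtype), frac_bits

# Example: quantize a random convolution weight tensor to 8 bits.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
w_q, f = quantize_dynamic_fixed_point(w, bits=8)
print(f"fractional length f={f}, max error={np.max(np.abs(w - w_q)):.4f}")
```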
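The per-layer precision adaptation can likewise be sketched as a greedy post-training search over bit-widths. The loop below is a hypothetical illustration of the accuracy/precision trade-off the abstract describes, not FxP-QNet's actual adaptation policy: `evaluate` stands for a user-supplied callback that quantizes the network with the given per-layer bit-widths and returns validation accuracy.

```python
def greedy_precision_search(layer_names, evaluate, max_drop=0.01):
    """Greedily lower per-layer bit-widths after training (hypothetical).

    Starting from a high uniform precision, repeatedly decrement the
    bit-width of whichever layer loses the least accuracy, and stop
    once any further reduction would exceed max_drop below baseline.
    """
    bitwidths = {name: 16 for name in layer_names}
    baseline = evaluate(bitwidths)  # near-full-precision reference accuracy
    improved = True
    while improved:
        improved = False
        best_name, best_acc = None, -1.0
        for name in layer_names:
            if bitwidths[name] <= 2:  # do not go below 2 bits
                continue
            trial = dict(bitwidths)
            trial[name] -= 1
            acc = evaluate(trial)
            if baseline - acc <= max_drop and acc > best_acc:
                best_name, best_acc = name, acc
        if best_name is not None:
            bitwidths[best_name] -= 1  # commit the cheapest reduction
            improved = True
    return bitwidths
```

Each outer iteration commits a single one-bit reduction, so the search terminates and yields a mixed-precision assignment in which every layer carries the lowest bit-width that keeps accuracy within the allowed drop.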