Min Soo Kim, Alberto A. Del Barrio, Leonardo Tavares Oliveira, R. Hermida, Nader Bagherzadeh
Source
Journal: IEEE Transactions on Computers [Institute of Electrical and Electronics Engineers], published 2018-11-12, vol. 68, no. 5, pp. 660-675. Cited by: 96.
Identifier
DOI:10.1109/tc.2018.2880742
Abstract
This paper proposes energy-efficient approximate multipliers based on Mitchell's log multiplication, optimized for performing inference on convolutional neural networks (CNNs). Various design techniques are applied to the log multiplier, including a fully parallel leading-one detector (LOD), efficient shift-amount calculation, and exact zero computation. Additionally, truncation of the operands is studied to create a customizable log multiplier that further reduces energy consumption. The paper also proposes using one's complement to handle negative numbers, as an approximation of the two's complement used in prior works. The viability of the proposed designs is supported by detailed formal analysis as well as experimental results on CNNs. The experiments also provide insights into the effect of approximate multiplication in CNNs, identifying the importance of minimizing the range of error. The proposed customizable design at w = 8 saves up to 88 percent energy compared to an exact 32-bit fixed-point multiplier, with a performance degradation of only 0.2 percent on the ImageNet ILSVRC2012 dataset.
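For readers unfamiliar with the base technique, the following is a minimal software sketch of classic Mitchell log multiplication for unsigned integers (not the paper's hardware design). It approximates log2(x) of x = 2^k(1+f) as k + f, adds the two approximate logarithms, and takes the approximate antilogarithm; the leading-one search stands in for the fully parallel LOD, and the explicit zero check mirrors the exact-zero computation mentioned in the abstract. The function name and float-based fraction handling are illustrative choices, not from the paper.

```python
def mitchell_mul(a: int, b: int) -> int:
    """Approximate a * b via Mitchell's log multiplication (unsigned)."""
    # Exact zero computation: Mitchell's scheme has no representation
    # for log(0), so zero operands are handled as a special case.
    if a == 0 or b == 0:
        return 0
    # Leading-one detection: k is the position of the most significant 1.
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1
    # Fractional parts f in [0, 1): a = 2^k1 * (1 + f1), b = 2^k2 * (1 + f2).
    f1 = (a - (1 << k1)) / (1 << k1)
    f2 = (b - (1 << k2)) / (1 << k2)
    # Approximate log sum is (k1 + k2) + (f1 + f2); the antilog depends on
    # whether the fraction sum carries into the characteristic.
    fsum = f1 + f2
    if fsum >= 1.0:
        return round((1 << (k1 + k2 + 1)) * fsum)
    return round((1 << (k1 + k2)) * (1.0 + fsum))
```

For example, `mitchell_mul(5, 7)` yields 32 instead of the exact 35, illustrating that Mitchell's approximation always underestimates the true product (by up to roughly 11 percent); products of exact powers of two, such as 8 * 8, incur no error.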