Authors
Xiaoyang Wang, Zhe Zhou, Zhihang Yuan, Jingchen Zhu, Y. Cao, Yao Zhang, Kangrui Sun, Guangyu Sun
Abstract
In emerging edge-computing scenarios, FPGAs have been widely adopted to accelerate convolutional neural network (CNN)–based image-processing applications, such as image classification, object detection, and image segmentation. A standard image-processing pipeline first decodes the compressed images collected from Internet of Things (IoT) devices into RGB data, then feeds them into CNN engines to compute the results. Previous works mainly focus on optimizing the CNN inference part. However, we notice that on the popular ZYNQ FPGA platforms, image decoding can also become the bottleneck due to the poor performance of the embedded ARM CPUs. Even with a hardware accelerator, the decoding operations still incur considerable latency. Moreover, conventional RGB-based CNNs have too few input channels at the first layer to utilize the high parallelism of CNN engines, which greatly slows down network inference. To overcome these problems, in this article, we propose FD-CNN, a novel CNN accelerator leveraging the partial-decoding technique to accelerate CNNs directly in the frequency domain. Specifically, we omit the most time-consuming IDCT (Inverse Discrete Cosine Transform) operations of image decoding and directly feed the DCT coefficients (i.e., the frequency data) into CNNs. By this means, the image decoder can be greatly simplified. Moreover, compared to RGB data, frequency data has a lower spatial resolution but 64× more channels. Such an input shape is more hardware-friendly than RGB data and can substantially reduce the CNN inference time. We then systematically discuss the algorithm, architecture, and command set design of FD-CNN. To deal with the irregularity of different CNN applications, we propose an image-decoding-aware design-space exploration (DSE) workflow to optimize the pipeline. We further propose an early stopping strategy to tackle the time-consuming progressive JPEG decoding. Comprehensive experiments demonstrate that FD-CNN achieves, on average, 3.24× and 4.29× throughput improvement, 2.55× and 2.54× energy reduction, and 2.38× and 2.58× latency reduction on the ZC-706 and ZCU-102 platforms, respectively, compared to the baseline image-processing pipelines.
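To illustrate the input-shape argument above, the following is a minimal NumPy/SciPy sketch, not the paper's decoder: it simply applies an 8×8 block-wise type-II DCT to an RGB array to show how the spatial resolution shrinks by 8× per dimension while the channel count grows 64×. A real JPEG pipeline would instead reuse the quantized coefficients already stored in the bitstream (in YCbCr, typically with chroma subsampling); the function name and layout here are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn  # type-II DCT, the transform used by JPEG


def rgb_to_dct_blocks(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Convert an (H, W, C) image into block-wise DCT coefficients.

    Returns an array of shape (H//block, W//block, C*block*block):
    spatial resolution shrinks by `block` in each dimension while the
    channel count grows by block*block (64 for 8x8 blocks).
    """
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    # Split into non-overlapping block x block tiles.
    tiles = img.reshape(h // block, block, w // block, block, c)
    tiles = tiles.transpose(0, 2, 4, 1, 3)            # (H/8, W/8, C, 8, 8)
    # 2-D DCT over the last two axes, i.e. independently per tile.
    coeffs = dctn(tiles, type=2, axes=(-2, -1), norm="ortho")
    return coeffs.reshape(h // block, w // block, c * block * block)


if __name__ == "__main__":
    rgb = np.random.rand(224, 224, 3).astype(np.float32)
    freq = rgb_to_dct_blocks(rgb)
    print(rgb.shape, "->", freq.shape)   # (224, 224, 3) -> (28, 28, 192)
```

A 224×224×3 RGB input thus becomes a 28×28×192 frequency-domain tensor, which is why the first CNN layer sees 64× more input channels and can map much better onto a highly parallel accelerator's channel dimension.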