Computer science
Workload
Graph
Field-programmable gate array
Hardware acceleration
Parallel computing
Distributed computing
Theoretical computer science
Embedded system
Operating system
Authors
Tong Geng, Ang Li, Runbin Shi, Chunshu Wu, Tianqi Wang, Yanfei Li, Pouya Haghi, Antonino Tumeo, Shuai Che, Steve Reinhardt, Martin C. Herbordt
Identifier
DOI: 10.1109/micro50266.2020.00079
Abstract
Deep learning systems have been successfully applied to Euclidean data such as images, video, and audio. In many applications, however, entities and their relationships are better expressed as graphs. Graph Convolutional Networks (GCNs) appear to be a promising approach to efficiently learn from graph data structures, having shown advantages in many critical applications. As with other deep learning modalities, hardware acceleration is critical. The challenge is that real-world graphs are often extremely large and unbalanced; this poses significant performance demands and design challenges. In this paper, we propose Autotuning-Workload-Balancing GCN (AWB-GCN) to accelerate GCN inference. To address the issue of workload imbalance in processing real-world graphs, three hardware-based autotuning techniques are proposed: dynamic distribution smoothing, remote switching, and row remapping. In particular, AWB-GCN continuously monitors the sparse graph pattern, dynamically adjusts the workload distribution among a large number of processing elements (up to 4K PEs), and, after converging, reuses the ideal configuration. Evaluation is performed using an Intel D5005 FPGA with five commonly used datasets. Results show that 4K-PE AWB-GCN improves PE utilization by 7.7× on average and achieves considerable speedups over CPUs (3255×), GPUs (80.3×), and a prior GCN accelerator (5.1×).
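The abstract rests on two technical points: the GCN layer computation X' = σ(ÂXW) over a sparse normalized adjacency matrix Â, and the row-level workload imbalance that power-law graphs create when rows are statically partitioned across processing elements. The following is a minimal software sketch of both, not the paper's hardware design: it computes one sparse GCN layer, then contrasts a naive contiguous row partition with a greedy longest-processing-time row assignment as a software analogue of AWB-GCN's row-remapping idea. All names here (`gcn_layer`, `greedy_row_remap`, `num_pes`) are illustrative assumptions, not identifiers from the paper.

```python
# Software sketch of a GCN layer and a simple row-remapping balancer,
# illustrating the workload-imbalance problem AWB-GCN targets in hardware.
# This is NOT the paper's design; all names are illustrative.
import numpy as np
import scipy.sparse as sp

def gcn_layer(a_hat, x, w):
    """One GCN propagation: ReLU(A_hat @ X @ W).
    a_hat: normalized sparse adjacency (N x N), x: features (N x F),
    w: weights (F x F')."""
    return np.maximum(a_hat @ (x @ w), 0.0)

def row_nnz(a_hat):
    """Per-row nonzero counts: the per-row work a PE must perform."""
    return np.diff(a_hat.tocsr().indptr)

def greedy_row_remap(a_hat, num_pes):
    """Assign rows to PEs greedily, heaviest rows first, each to the
    least-loaded PE (longest-processing-time heuristic): a software
    analogue of row remapping done statically instead of on the fly."""
    nnz = row_nnz(a_hat)
    loads = np.zeros(num_pes, dtype=np.int64)
    assignment = np.empty(len(nnz), dtype=np.int64)
    for row in np.argsort(nnz)[::-1]:   # heaviest rows first
        pe = int(np.argmin(loads))      # least-loaded PE so far
        assignment[row] = pe
        loads[pe] += nnz[row]
    return assignment, loads

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, f, f_out, num_pes = 1024, 64, 32, 8
    # Power-law-ish sparsity: a few dense rows, many near-empty ones,
    # mimicking the imbalance of real-world graphs.
    density = rng.pareto(2.0, n)
    density /= density.max()
    rows = [sp.random(1, n, density=max(d, 1e-3), random_state=rng)
            for d in density]
    a_hat = sp.vstack(rows).tocsr()
    x = rng.standard_normal((n, f)).astype(np.float32)
    w = rng.standard_normal((f, f_out)).astype(np.float32)
    _ = gcn_layer(a_hat, x, w)
    # Compare naive contiguous row blocks vs. greedy remapping.
    nnz = row_nnz(a_hat)
    naive_loads = nnz.reshape(num_pes, -1).sum(axis=1)
    _, balanced_loads = greedy_row_remap(a_hat, num_pes)
    print("naive PE loads:   ", naive_loads)
    print("balanced PE loads:", balanced_loads)
```

Running the sketch shows that contiguous row blocks can leave some partitions with far more nonzeros than others, while the greedy remap nearly equalizes them; the paper's contribution is performing this kind of rebalancing dynamically in hardware, by monitoring the sparsity pattern at runtime rather than precomputing a static assignment as done here.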