Accelerating Sampling and Aggregation Operations in GNN Frameworks with GPU Initiated Direct Storage Accesses

Keywords: Computer Science, Parallel Computing, Sampling (Signal Processing), Computer Data Storage, Databases, Computer Hardware, Computer Vision, Filter (Signal Processing)
Authors
Jeongmin Park, Vikram Sharma Mailthody, Zaid Qureshi, Wen-mei Hwu
Source
Journal: Proceedings of the VLDB Endowment [VLDB Endowment]
Volume/Issue: 17 (6): 1227-1240  Citations: 1
Identifier
DOI: 10.14778/3648160.3648166
Abstract

Graph Neural Networks (GNNs) are emerging as a powerful tool for learning from graph-structured data and performing sophisticated inference tasks in various application domains. Although GNNs have been shown to be effective on modest-sized graphs, training them on large-scale graphs remains a significant challenge due to the lack of efficient storage access and caching methods for graph data. Existing frameworks for training GNNs use CPUs for graph sampling and feature aggregation, while the training and updating of model weights are executed on GPUs. However, our in-depth profiling shows that CPUs cannot achieve the graph sampling and feature aggregation throughput required to keep up with GPUs. Furthermore, when the graph and its embeddings do not fit in CPU memory, the overhead introduced by the operating system, e.g., for handling page faults, causes gross under-utilization of hardware and prolonged end-to-end execution time. To address these issues, we propose the GPU Initiated Direct Storage Access (GIDS) dataloader to enable GPU-oriented GNN training for large-scale graphs while efficiently utilizing all hardware resources, such as CPU memory, storage, and GPU memory. The GIDS dataloader first addresses memory capacity constraints by enabling GPU threads to directly fetch feature vectors from storage. Then, we introduce a set of innovative solutions, including the dynamic storage access accumulator, constant CPU buffer, and GPU software cache with window buffering, to balance resource utilization across the entire system for improved end-to-end training throughput. Our evaluation using a single GPU on terabyte-scale GNN datasets shows that the GIDS dataloader accelerates the overall DGL GNN training pipeline by up to 582× compared to the current state-of-the-art DGL dataloader.
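The abstract's software cache exploits the fact that consecutive mini-batches of sampled neighborhoods often share nodes, so a cached feature vector can be served without another storage access. The following is a minimal illustrative sketch of that idea, not the actual GIDS API: a simple LRU cache over node feature vectors with a simulated "storage" fallback (`FeatureCache` and `storage_fetch` are hypothetical names invented for this example).

```python
# Hypothetical sketch (not the GIDS API): a software cache for node
# feature vectors, showing how overlap between consecutive sampled
# mini-batches turns repeated storage fetches into cache hits.
from collections import OrderedDict
import numpy as np

class FeatureCache:
    """A tiny LRU cache mapping node IDs to feature vectors."""
    def __init__(self, capacity, fetch_fn):
        self.capacity = capacity
        self.fetch_fn = fetch_fn      # fallback path: a "storage" read
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, node_id):
        if node_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(node_id)  # mark as most recently used
            return self.cache[node_id]
        self.misses += 1
        feat = self.fetch_fn(node_id)
        self.cache[node_id] = feat
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return feat

def storage_fetch(node_id):
    # Stand-in for a storage access: deterministic per-node features.
    rng = np.random.default_rng(node_id)
    return rng.standard_normal(4)

cache = FeatureCache(capacity=3, fetch_fn=storage_fetch)
# Three overlapping mini-batches of sampled node IDs.
for batch in [[0, 1, 2], [1, 2, 3], [2, 3, 4]]:
    feats = np.stack([cache.get(n) for n in batch])
print(cache.hits, cache.misses)   # prints "4 5": 4 hits, 5 storage reads
```

In the real system the cache lives in GPU memory and the miss path is a GPU-initiated NVMe read, with window buffering deciding which fetched features are worth retaining across batches; the eviction policy here is ordinary LRU chosen purely for brevity.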
