Topics: Boosting (machine learning); Computer science; Artificial intelligence
Authors
Dahai Tang, Jiali Wang, Rong Chen, Lei Wang, Wenyuan Yu, Jingren Zhou, Kenli Li
Source
Journal: Proceedings of the VLDB Endowment
[VLDB Endowment]
Date: 2024-01-01
Volume/Issue: 17 (5): 1105-1118
Citations: 1
Identifiers
DOI: 10.14778/3641204.3641219
Abstract
GPUs are commonly used to accelerate GNN training, particularly on multi-GPU servers with high-speed interconnects (e.g., NVLink and NVSwitch). However, the rapidly increasing scale of graphs poses a challenge to applying GNNs in real-world applications, due to limited GPU memory. This paper presents XGNN, a multi-GPU GNN training system that fully utilizes system memory (e.g., GPU and host memory) as well as high-speed interconnects. The core design of XGNN is the Global GNN Memory Store (GGMS), which abstracts the underlying resources to provide a unified memory store for GNN training. It partitions hybrid input data, including graph topology and feature data, across both GPU and host memory. GGMS also provides easy-to-use APIs through which GNN applications access data transparently, automatically forwarding data access requests to the actual physical data partitions. Evaluation on various multi-GPU platforms using three common GNN models with four large-scale datasets shows that XGNN outperforms DGL, Quiver and DGL+C by up to 7.9X (from 2.3X), 15.7X (from 3.3X) and 2.8X (from 1.3X), respectively.
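To make the GGMS idea concrete, here is a minimal, hypothetical PyTorch sketch of a partitioned feature store of the kind the abstract describes: some feature rows are held in GPU memory, the remainder in pinned host memory, and a single lookup call routes each request to the physical partition holding the row. This is not XGNN's actual API; the class and method names (PartitionedFeatureStore, lookup) and the gpu_fraction parameter are assumptions, and a CUDA device is assumed for pin_memory().

```python
import torch

class PartitionedFeatureStore:
    """Hypothetical GGMS-style store: rows [0, split) live in GPU memory,
    the remaining rows in pinned host memory."""

    def __init__(self, features: torch.Tensor, gpu_fraction: float = 0.5,
                 device: str = "cuda"):
        self.device = torch.device(device)
        self.split = int(features.size(0) * gpu_fraction)
        self.gpu_part = features[:self.split].to(self.device)
        # Pinned host memory enables fast, asynchronous DMA copies to the GPU.
        self.host_part = features[self.split:].contiguous().pin_memory()

    def lookup(self, ids: torch.Tensor) -> torch.Tensor:
        """Gather feature rows by global id; callers never see the split."""
        ids = ids.to(self.device)
        out = torch.empty(ids.size(0), self.gpu_part.size(1),
                          dtype=self.gpu_part.dtype, device=self.device)
        on_gpu = ids < self.split
        # GPU-resident rows: gather directly on the device.
        out[on_gpu] = self.gpu_part.index_select(0, ids[on_gpu])
        # Host-resident rows: gather on the CPU, then async-copy to the GPU.
        host_ids = (ids[~on_gpu] - self.split).cpu()
        out[~on_gpu] = self.host_part.index_select(0, host_ids).to(
            self.device, non_blocking=True)
        return out

feats = torch.randn(100_000, 128)                 # toy node feature matrix
store = PartitionedFeatureStore(feats, gpu_fraction=0.25)
batch = store.lookup(torch.randint(0, 100_000, (1024,)))  # mini-batch gather
```

Using pinned (page-locked) host memory is what lets the host-to-GPU copies run asynchronously over the interconnect, which matches the abstract's emphasis on exploiting both host memory and high-speed links such as NVLink.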