Computer science
Scalability
Speedup
Overhead (engineering)
Computation
Cache (computing)
Distributed computing
Graph
Encoding (memory)
Parallel computing
Theoretical computer science
Artificial intelligence
Algorithm
Database
Programming language
Authors
Mingyu Guan, Anand Iyer, Taesoo Kim
Identifier
DOI: 10.1145/3534540.3534691
Abstract
In this paper, we present DynaGraph, a system that supports dynamic Graph Neural Networks (GNNs) efficiently. Based on the observation that existing proposals for dynamic GNN architectures combine techniques for structural and temporal information encoding independently, DynaGraph proposes novel techniques that enable cross optimizations across these tasks. It uses cached message passing and timestep fusion to significantly reduce the overhead associated with dynamic GNN processing. It further proposes a simple distributed data-parallel dynamic graph processing strategy that enables scalable dynamic GNN computation. Our evaluation of DynaGraph on a variety of dynamic GNN architectures and use cases shows a speedup of up to 2.7X compared to existing state-of-the-art frameworks.
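The abstract describes caching structural message passing so that it is not recomputed for every timestep of a dynamic graph before temporal encoding. The following is a minimal sketch of that general idea, not DynaGraph's actual implementation: the class names (`CachedGraphConv`, `DynamicGNN`), the cache-key mechanism, and the GCN-plus-GRU layout are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not DynaGraph's code) of caching the
# structural aggregation of a dynamic GNN across timesteps whose graph
# snapshot is unchanged, so the temporal encoder reuses the cached result.
import torch
import torch.nn as nn


class CachedGraphConv(nn.Module):
    """One graph-convolution layer that caches its neighbourhood aggregation."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self._cache_key = None   # identifies the (adjacency, features) snapshot
        self._cache_val = None   # cached aggregation result

    def forward(self, adj, feats, cache_key=None):
        # Reuse the cached aggregation when the caller signals that the
        # snapshot (structure + features) is unchanged since the last call.
        if cache_key is not None and cache_key == self._cache_key:
            agg = self._cache_val
        else:
            agg = torch.sparse.mm(adj, feats) if adj.is_sparse else adj @ feats
            self._cache_key, self._cache_val = cache_key, agg
        return torch.relu(self.linear(agg))


class DynamicGNN(nn.Module):
    """Structural encoder (graph conv) followed by a temporal encoder (GRU)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gconv = CachedGraphConv(in_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, snapshots):
        # snapshots: list of (adj, feats, cache_key) tuples, one per timestep.
        per_step = [self.gconv(adj, x, key) for adj, x, key in snapshots]
        seq = torch.stack(per_step, dim=1)   # (num_nodes, T, hid_dim)
        out, _ = self.gru(seq)
        return out[:, -1]                    # embedding after the last timestep


if __name__ == "__main__":
    n, d, h, T = 5, 8, 16, 4
    adj = torch.eye(n)                       # toy fixed structure
    feats = torch.randn(n, d)
    # Same key at every step: aggregation is computed once, reused T-1 times.
    snaps = [(adj, feats, "snapshot-0") for _ in range(T)]
    model = DynamicGNN(d, h)
    print(model(snaps).shape)                # torch.Size([5, 16])
```

In this sketch the saving comes purely from skipping repeated aggregations over identical snapshots; the paper's cached message passing and timestep fusion are broader cross-optimizations between the structural and temporal encoders.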