Computer science
Theoretical computer science
Graph
Message passing
Inductive bias
Distributed computing
Multi-task learning
Management
Economics
Task (project management)
Authors
Ali Behrouz, Farnoosh Hashemi
Identifier
DOI:10.1145/3637528.3672044
Abstract
Graph Neural Networks (GNNs) have shown promising potential in graph representation learning. The majority of GNNs define a local message-passing mechanism, propagating information over the graph by stacking multiple layers. These methods, however, are known to suffer from two major limitations: over-squashing and poor capturing of long-range dependencies. Recently, Graph Transformers (GTs) emerged as a powerful alternative to Message-Passing Neural Networks (MPNNs). GTs, however, have quadratic computational cost, lack inductive biases on graph structures, and rely on complex Positional Encodings (PE). In this paper, we show that while Transformers, complex message-passing, and PE are sufficient for good performance in practice, none of them is necessary. Motivated by the recent success of State Space Models (SSMs), we present Graph Mamba Networks (GMNs), a framework for a new class of GNNs based on selective SSMs. We discuss the new challenges that arise when adapting SSMs to graph-structured data, and present four required steps for designing GMNs: (1) Neighborhood Tokenization, (2) Token Ordering, (3) Architecture of the SSM Encoder, and (4) Local Encoding. We provide theoretical justification for the power of GMNs, and experimentally show that GMNs attain outstanding performance on various benchmark datasets. The code is available at this link.
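To make the four design steps named in the abstract concrete, below is a minimal, illustrative sketch in plain NumPy. It is not the authors' released implementation: the function names, the toy selective-scan parameterization, the hop-based tokenization, and the random example graph are all hypothetical placeholders chosen to show the overall data flow (local encoding, neighborhood tokenization, token ordering, then a data-dependent recurrent scan per node).

```python
# Hypothetical sketch of the GMN-style pipeline described in the abstract.
# Not the authors' code; the selective scan below only loosely mimics a
# selective SSM (Mamba-like) recurrence for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_encoding(adj, features):
    """Step (4): a simple one-hop mean aggregation standing in for a local encoder."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    return (features + adj @ features) / deg

def neighborhood_tokenization(adj, features, num_hops=2):
    """Step (1): for each node, build a token sequence from its k-hop neighborhoods."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)            # nodes reachable within 0 hops
    tokens = [[features[v]] for v in range(n)]
    for _ in range(num_hops):
        reach = reach | (reach @ adj > 0)    # expand reachability by one hop
        for v in range(n):
            tokens[v].append(features[reach[v]].mean(axis=0))  # pool the neighborhood
    return [np.stack(t) for t in tokens]

def token_ordering(token_seq):
    """Step (2): order tokens from the farthest hop toward the node itself,
    so the recurrent scan ends on the most local information."""
    return token_seq[::-1]

def selective_ssm_encoder(x, d_state=8):
    """Step (3): a toy selective scan whose step size and projections depend on
    the current token (data-dependent gating); not a real Mamba kernel."""
    seq_len, d_model = x.shape
    W_delta = rng.normal(scale=0.1, size=(d_model, 1))
    W_b = rng.normal(scale=0.1, size=(d_model, d_state))
    W_c = rng.normal(scale=0.1, size=(d_model, d_state))
    A = -np.abs(rng.normal(size=(d_state,)))           # stable diagonal dynamics
    h = np.zeros(d_state)
    outputs = []
    for t in range(seq_len):
        delta = np.log1p(np.exp(x[t] @ W_delta))[0]    # softplus step size (input-dependent)
        B = x[t] @ W_b
        C = x[t] @ W_c
        h = np.exp(delta * A) * h + delta * B * x[t].mean()  # discretized state update
        outputs.append(C * h)                          # gated readout per step
    return np.array(outputs)

# Toy usage on a small random undirected graph.
n, d = 6, 4
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 0)
feats = rng.normal(size=(n, d))

local = local_encoding(adj, feats)
node_tokens = neighborhood_tokenization(adj, local)
node_embeddings = np.array([
    selective_ssm_encoder(token_ordering(seq))[-1]     # keep the final scan state per node
    for seq in node_tokens
])
print(node_embeddings.shape)  # (6, 8)
```

In this sketch, ordering tokens from the farthest hop inward means the recurrence's final state is conditioned on the whole neighborhood hierarchy while remaining linear in the sequence length, which is the kind of trade-off the abstract contrasts with quadratic-cost Graph Transformers.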