Computer science
Transformer
Graph
Theoretical computer science
Architecture
Expressive power
Artificial intelligence
Engineering
Electrical engineering
Art
Visual arts
Voltage
Authors
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu
Source
Venue: arXiv (Cornell University)
Date: 2021-01-01
Citations: 95
Identifier
DOI: 10.48550/arxiv.2106.05234
Abstract
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet it has not achieved competitive performance on popular leaderboards for graph-level prediction compared with mainstream GNN variants. It therefore remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a broad range of graph representation learning tasks, notably on the recent OGB Large-Scale Challenge. Our key insight for applying Transformers to graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods that help Graphormer better model graph-structured data. In addition, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding graph structure, many popular GNN variants can be covered as special cases of Graphormer.
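To make the idea of structural encoding concrete, below is a minimal PyTorch sketch of one such mechanism described in the paper, spatial encoding: a learnable scalar bias, indexed by the shortest-path distance between a pair of nodes, is added to the attention logits before the softmax. The class name SpatialEncodingAttention, the single-head simplification, and parameters such as max_dist are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SpatialEncodingAttention(nn.Module):
    """Single-head self-attention with a spatial-encoding bias (illustrative sketch)."""

    def __init__(self, dim: int, max_dist: int = 16):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learnable scalar bias per shortest-path-distance bucket.
        self.dist_bias = nn.Embedding(max_dist + 1, 1)
        self.max_dist = max_dist
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, spd: torch.Tensor) -> torch.Tensor:
        # x:   (n_nodes, dim) node features
        # spd: (n_nodes, n_nodes) integer shortest-path distances
        spd = spd.clamp(max=self.max_dist)                  # bucket long distances
        logits = (self.q(x) @ self.k(x).T) * self.scale     # scaled dot-product scores
        logits = logits + self.dist_bias(spd).squeeze(-1)   # add distance-indexed bias
        return torch.softmax(logits, dim=-1) @ self.v(x)

# Illustrative usage on a random 5-node graph:
x = torch.randn(5, 32)                    # node features
spd = torch.randint(0, 4, (5, 5))         # precomputed shortest-path distances
out = SpatialEncodingAttention(dim=32)(x, spd)
```

In the full model, this bias is one of several structural encodings (alongside, e.g., degree-based centrality encoding added to the node features), and it lets every node attend to every other node while remaining aware of graph distance.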