Computer science
Feature learning
Graph
Transformer
Graph kernel
Artificial intelligence
Theoretical computer science
Unsupervised learning
Machine learning
Kernel method
Support vector machine
Physics
Quantum mechanics
Voltage
Variable kernel density estimation
Authors
Lixiang Xu, Haifeng Liu, Qingzhe Cui, Bin Luo, Ning Li, Yan Chen, Yuanyan Tang
Identifier
DOI:10.1109/ijcnn54540.2023.10192010
Abstract
This paper studies graph representation learning in unsupervised scenarios using Transformer models. Transformer networks have been widely adopted across machine learning and deep learning, and applying Transformer architectures to graph data has recently become popular. Graph-level representations are widely used in practice, for example in drug molecule design and disease classification in biochemistry. Traditional graph kernel methods, which design different kernels for different substructures, are simple but generalize poorly. More recent methods based on language models, such as graph2vec, use a particular substructure as the graph representation; this is similarly hand-crafted and likewise limits generalization. In this paper, we propose the UGTransformer model, which builds on the standard Transformer architecture. We introduce several simple and effective structural encoding methods to inject the structural information of the graph into the model efficiently. Unsupervised graph representations are learned through a multi-head attention mechanism and powerful aggregation functions. We conducted experiments on a benchmark data set for graph classification, and the experimental results validate the effectiveness of the proposed model.
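The abstract's pipeline (structural encoding added to node features, self-attention over nodes, then an aggregation readout for a graph-level vector) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the degree-based encoding, single attention head, random weight matrices, and mean-pooling readout are all simplifying assumptions standing in for the paper's learned components.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def degree_encoding(adj, d_model):
    # Structural encoding (assumption): look up each node's degree in a
    # fixed random table, standing in for a trainable embedding lookup.
    rng = np.random.default_rng(0)
    table = rng.normal(size=(adj.shape[0] + 1, d_model))
    degrees = adj.sum(axis=1).astype(int)
    return table[degrees]

def attention_layer(x, Wq, Wk, Wv):
    # Standard scaled dot-product self-attention over all node pairs
    # (single head here; the paper uses multi-head attention).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    return softmax(scores, axis=-1) @ v

def graph_embedding(features, adj, d_model=8):
    rng = np.random.default_rng(1)
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    x = features + degree_encoding(adj, d_model)  # inject graph structure
    h = attention_layer(x, Wq, Wk, Wv)
    return h.mean(axis=0)  # mean-pool readout as the aggregation function

# Toy 4-node path graph with one-hot-style node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4, 8)
emb = graph_embedding(feats, adj)
print(emb.shape)  # (8,) — one fixed-size vector per graph
```

Because attention is computed over all node pairs, the structural encoding is the only place the adjacency matrix enters; richer encodings (e.g. shortest-path or Laplacian features) slot in at the same point.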