Keywords: Computer science, Embedding, Graph embedding, Graph, Theoretical computer science, Word embedding, Artificial intelligence
Authors
Zhigang Sun,Lie Wang,Junqiang Sun
Identifier
DOI:10.1016/j.neucom.2023.03.053
Abstract
Graph embedding aims at learning continuous vector representations for graphs, which is crucial for graph analytics. Natural Language Processing (NLP)-based graph embedding methods build a corpus for graph data by treating substructures as words, and then use NLP models to learn graph embeddings. However, the size differences and data redundancy among substructures are rarely addressed in the built corpora. To mitigate this problem, we propose an unsupervised multi-scale graph embedding method. To be specific, we first build multiple graph corpora for a graph dataset, where each corpus contains only substructures of a specific granularity. Then, we apply an extended document embedding model to each graph corpus to obtain graph embeddings of different scales. Finally, we obtain the multi-scale embedding of a graph by pooling its multiple embeddings. Comprehensive experiments on real graph datasets indicate that the proposed method obtains results competitive with the state of the art, and is superior to several classic graph kernels and graph embedding methods on six out of ten benchmark datasets.
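The pipeline the abstract describes — substructures of a fixed granularity as "words", one corpus per scale, one embedding per corpus, then pooling — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: Weisfeiler-Lehman-style relabeling stands in for the paper's substructure extraction, and a hashed bag-of-words vector stands in for the (unspecified) document embedding model; the function names and the mean-pooling choice are assumptions.

```python
import hashlib
from collections import Counter

def wl_words(adj, depth):
    """Substructure 'words' at one granularity: WL-style labels after
    `depth` rounds of neighborhood aggregation (a stand-in for the
    paper's substructure extraction)."""
    labels = {v: str(len(nbrs)) for v, nbrs in adj.items()}  # start from degrees
    for _ in range(depth):
        labels = {v: labels[v] + "|" + "".join(sorted(labels[u] for u in adj[v]))
                  for v in adj}
    return list(labels.values())

def embed_document(words, dim=16):
    """Toy document embedding: hashed term counts, L1-normalized.
    A real system would train a doc2vec-style model per corpus instead."""
    vec = [0.0] * dim
    for w, c in Counter(words).items():
        h = int(hashlib.md5(w.encode()).hexdigest(), 16)
        vec[h % dim] += c
    total = sum(vec) or 1.0
    return [x / total for x in vec]

def multi_scale_embedding(adj, scales=(1, 2, 3), dim=16):
    """One corpus/embedding per scale, then mean-pool into the final vector."""
    embs = [embed_document(wl_words(adj, k), dim) for k in scales]
    return [sum(col) / len(embs) for col in zip(*embs)]

# Example: a triangle graph as an adjacency dict.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
emb = multi_scale_embedding(triangle)
```

Keeping each scale in its own corpus means short and long substructure words never compete inside one vocabulary, which is the redundancy/size-difference issue the method targets; concatenation could replace mean pooling if scale-specific information should be preserved.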