Embedding
Graph embedding
Theoretical computer science
Computer science
Feature learning
Graph
Topological graph theory
Artificial intelligence
Voltage graph
Line graph
Authors
Bo Jiang, Leiling Wang, Jian Cheng, Jin Tang, Bin Luo
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-08-01
Volume/Issue: 34 (8): 3925-3938
Citations: 1
Identifier
DOI: 10.1109/tnnls.2021.3120100
Abstract
Compact representation of graph data is a fundamental problem in pattern recognition and machine learning. Recently, graph neural networks (GNNs) have been widely studied for graph-structured data representation and learning tasks, such as graph semi-supervised learning, clustering, and low-dimensional embedding. In this article, we present graph propagation-embedding networks (GPENs), a new model for graph-structured data representation and learning problems. GPENs are mainly motivated by 1) revisiting traditional graph propagation techniques for graph node context-aware feature representation and 2) recent studies on deep graph embedding and neural network architectures. GPENs integrate feature propagation on the graph and low-dimensional embedding simultaneously into a unified network using a novel propagation-embedding architecture. GPENs have three main advantages. First, GPENs are well motivated and can be explained from both feature propagation and deep learning architecture perspectives. Second, the equilibrium representation of the propagation-embedding operation in GPENs has both exact and approximate formulations, each with a simple closed-form solution, which guarantees the compactness and efficiency of GPENs. Third, GPENs can be naturally extended to multiple GPENs (M-GPENs) to handle data with multiple graph structures. Experiments on various semi-supervised learning tasks on several benchmark datasets demonstrate the effectiveness and benefits of the proposed GPENs and M-GPENs.
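The abstract's contrast between an exact equilibrium and an approximate formulation can be illustrated with a standard propagation fixed point. The sketch below is hypothetical and not taken from the paper: it assumes a personalized-PageRank-style operator H ← αŜH + (1−α)XW, where Ŝ is the symmetrically normalized adjacency matrix, W is a learnable embedding matrix, and α is a propagation coefficient; GPENs' actual propagation-embedding operator may differ. Under this assumption, the exact equilibrium is the closed form H* = (1−α)(I − αŜ)⁻¹XW, and the approximate formulation is a truncated K-step iteration toward the same fixed point.

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops:
    # S = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def propagation_embedding_exact(A, X, W, alpha=0.9):
    # Exact equilibrium of H <- alpha * S @ H + (1 - alpha) * X @ W,
    # i.e., H* = (1 - alpha) * (I - alpha * S)^{-1} @ X @ W.
    # (Hypothetical stand-in for GPENs' propagation-embedding operator.)
    n = A.shape[0]
    S = normalize_adj(A)
    return (1.0 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, X @ W)

def propagation_embedding_approx(A, X, W, alpha=0.9, K=10):
    # Truncated K-step iteration approximating the same fixed point;
    # converges to the exact solution as K grows (since 0 < alpha < 1).
    S = normalize_adj(A)
    Z = X @ W
    H = Z.copy()
    for _ in range(K):
        H = alpha * (S @ H) + (1.0 - alpha) * Z
    return H

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((6, 6)) < 0.3).astype(float)
    A = np.maximum(A, A.T)           # symmetric toy graph
    X = rng.standard_normal((6, 5))  # node features
    W = rng.standard_normal((5, 2))  # embedding to 2-D
    H_exact = propagation_embedding_exact(A, X, W)
    H_approx = propagation_embedding_approx(A, X, W, K=50)
    print(np.max(np.abs(H_exact - H_approx)))  # small for large K
```

This mirrors the trade-off the abstract describes: the exact form requires an n×n linear solve, while the K-step iteration uses only sparse matrix-vector products, making it the cheaper choice on large graphs.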