Computer science
Scalability
Stochastic gradient descent
Convolutional neural network
Graph
Artificial intelligence
Benchmark (surveying)
Mathematical proof
Generalization
Random graph
Machine learning
Theoretical computer science
Artificial neural network
Mathematics
Geometry
Geodesy
Database
Geography
Mathematical analysis
Authors
Changqin Huang, Ming Li, Feilong Cao, Hamido Fujita, Zhao Li, Xindong Wu
Identifier
DOI: 10.1109/tpami.2022.3183143
Abstract
Graph Convolutional Networks (GCNs), a prominent example of graph neural networks, have received extensive attention for their powerful capability to learn node representations on graphs. Various extensions, in sampling and/or node feature aggregation, have been proposed to further improve GCNs' performance, scalability, and applicability across domains. Still, there is room to improve learning efficiency: training a (vanilla) GCN unavoidably performs full-batch gradient descent over the entire dataset at every iteration, which is not a viable option for large graphs. The potential of random features to speed up training in large-scale problems motivates us to consider carefully whether GCNs with random weights are feasible. To investigate this issue theoretically and empirically, we propose a novel model, Graph Convolutional Networks with Random Weights (GCN-RW), which revises the convolutional layer with random filters and simultaneously adjusts the learning objective to a regularized least squares loss. Theoretical analyses of the model's approximation upper bound, structural complexity, stability, and generalization are provided with rigorous mathematical proofs. The effectiveness and efficiency of GCN-RW are verified on semi-supervised node classification tasks with several benchmark datasets. Experimental results demonstrate that, in comparison with state-of-the-art approaches, GCN-RW achieves better or matching accuracy with less training time.
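The abstract's core idea — graph convolution layers with fixed random filters plus an output layer fitted by regularized least squares — can be illustrated with a minimal sketch. The code below is an assumption-laden reading of that description, not the authors' implementation: all function names, the two-layer ReLU architecture, the Gaussian weight scaling, and the ridge parameter are illustrative choices.

```python
# Minimal sketch of the GCN-RW idea: (1) propagate node features through graph
# convolutions whose filter weights are random and never trained, and
# (2) solve only the readout layer in closed form via ridge regression.
# Illustrative only; the paper's actual model and analysis are more involved.

import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in vanilla GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def random_gcn_embedding(A, X, hidden_dims=(64, 64), seed=0):
    """Stack graph convolutions with fixed random filters: H <- ReLU(S H W)."""
    rng = np.random.default_rng(seed)
    S = normalize_adjacency(A)
    H = X
    for d in hidden_dims:
        # Random filter, drawn once and kept fixed (no gradient descent).
        W = rng.standard_normal((H.shape[1], d)) / np.sqrt(H.shape[1])
        H = np.maximum(S @ H @ W, 0.0)
    return H

def ridge_readout(H_train, Y_train, lam=1e-2):
    """Regularized least squares: beta = (H^T H + lam I)^{-1} H^T Y."""
    d = H_train.shape[1]
    return np.linalg.solve(H_train.T @ H_train + lam * np.eye(d),
                           H_train.T @ Y_train)

# Toy semi-supervised node classification: embed all nodes, fit the readout
# on the labeled subset only, then predict labels for every node.
rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T          # symmetric adjacency, no self-loops
X = rng.standard_normal((50, 16))       # node features
Y = np.eye(3)[rng.integers(0, 3, 50)]   # one-hot labels for 3 classes
train_idx = np.arange(20)               # labeled nodes

H = random_gcn_embedding(A, X)
beta = ridge_readout(H[train_idx], Y[train_idx])
pred = (H @ beta).argmax(axis=1)        # predicted class per node
```

Because the hidden weights are fixed, the only "training" is one linear solve, which is the source of the speed-up the abstract claims over iterative full-batch gradient descent.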