Computer science
Scalability
Artificial neural network
Message passing
Graph
Artificial intelligence
Sampling (signal processing)
Algorithm
Theoretical computer science
Distributed computing
Filter (signal processing)
Database
Computer vision
Authors
Weichen Zhao, Tong Guo, Xianxiang Yu, Congying Han
Identifier
DOI:10.1016/j.neunet.2023.03.015
Abstract
With the development of graph neural networks (GNNs), how to handle large-scale graph data has become an increasingly important topic. Currently, most GNN models that scale to large graphs rely on random sampling, yet the sampling process in these models is detached from the forward propagation of the network. Moreover, many existing sampling schemes are designed via statistical estimation for graph convolutional networks (GCNs), where the message-passing weights between nodes are fixed; such schemes therefore do not generalize to message-passing networks with variable weights, such as graph attention networks. Exploiting the end-to-end learning capability of neural networks, we propose a learnable sampling method. It resolves the problem that random sampling operations admit no gradients, and it samples nodes with non-fixed probabilities. In this way, the sampling process is dynamically coupled with the forward propagation of the features, allowing the networks to be trained better, and the method generalizes to all message-passing models. In addition, we apply the learnable sampling method to GNNs and propose two models. Our method can be flexibly combined with different GNN models and achieves excellent accuracy on benchmark datasets with large graphs, while the loss converges to smaller values at a faster rate during training than previous methods.
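The abstract does not spell out how gradients are obtained through the sampling step. One standard way to make node sampling differentiable, which may or may not match the authors' construction, is a Gumbel-softmax relaxation: a small network scores candidate neighbors, and the stochastic hard draw is replaced by a soft, temperature-controlled weighting that gradients can flow through. The sketch below illustrates only this general idea; the class name `LearnableNeighborSampler`, the scoring layer, and the temperature `tau` are all hypothetical choices, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableNeighborSampler(nn.Module):
    """Illustrative (hypothetical) differentiable sampler: scores each
    candidate neighbor, then draws a soft sample via the Gumbel-softmax
    relaxation so gradients flow through the sampling step."""

    def __init__(self, in_dim: int, tau: float = 0.5):
        super().__init__()
        self.score = nn.Linear(in_dim, 1)  # learnable sampling logits
        self.tau = tau                     # relaxation temperature

    def forward(self, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # neighbor_feats: (num_neighbors, in_dim)
        logits = self.score(neighbor_feats).squeeze(-1)        # (num_neighbors,)
        # Gumbel noise keeps the draw stochastic yet differentiable.
        gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
        weights = F.softmax((logits + gumbel) / self.tau, dim=0)
        # Soft weighted aggregation replaces a hard, non-differentiable pick.
        return weights @ neighbor_feats                        # (in_dim,)

# Toy usage: aggregate a sampled message for one target node.
sampler = LearnableNeighborSampler(in_dim=16)
neighbors = torch.randn(10, 16)   # 10 candidate neighbors
message = sampler(neighbors)
print(message.shape)              # torch.Size([16])
```

Because the sampling weights depend on node features rather than fixed statistics, a relaxation of this kind would also apply to message-passing models with variable weights, such as graph attention networks, which is the generality the abstract claims for the proposed method.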