Computer science
Feature (linguistics)
Graph
Artificial intelligence
Feature learning
Node (physics)
Machine learning
Convolutional neural network
Pattern recognition (psychology)
Theoretical computer science
Data mining
Linguistics
Structural engineering
Engineering
Philosophy
Authors
Zheng Jin, Yan Wang, Wanjun Xu, Zilu Gan, Ping Li, Jiancheng Lv
Identifier
DOI:10.1016/j.neucom.2020.07.098
Abstract
The graph convolutional network (GCN) has been proven to be an effective framework for graph-based semi-supervised learning applications. The core operation block of a GCN is the convolutional layer, which enables the network to construct node embeddings by fusing both the attributes of nodes and the relationships between nodes. Different features or feature interactions inherently have varying influences on the convolutional layers. However, there are very limited studies on the impact of feature importance in GCN-related communities. In this work, we attempt to augment the convolutional layers in GCNs with statistical attention-based feature importance by modeling the latent interactions of features; this is complementary to standard GCNs and requires only simple statistical calculations rather than heavy training. To this end, we treat the feature input of each convolutional layer as a separate multi-layer heterogeneous graph and propose the Graph Statistical Self-Attention (GSSA) method to automatically learn the hierarchical structure of feature importance. More specifically, we propose two modules in GSSA: Channel-wise Self-Attention (CSA), which captures the dependencies between feature channels, and Mean-based Self-Attention (MSA), which reweights similarities among features. Targeting each graph convolutional layer, GSSA can be applied in a "plug and play" manner to a wide range of GCN variants. To the best of our knowledge, this is the first implementation that optimizes GCNs from the feature importance perspective. Extensive experiments demonstrate that GSSA remarkably improves existing popular baselines on semi-supervised node classification tasks. We further employ multiple qualitative evaluations to gain deeper insights into our method.
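The abstract does not give the exact formulas for CSA and MSA, so the sketch below is only a hypothetical illustration of the general idea it describes: deriving attention weights from simple statistics of the feature input (here, per-channel means) and using them to reweight features "plug and play" before a standard GCN propagation step. The class name StatAttentionGCNLayer, the mean-based channel scoring, and the dense adjacency format are all assumptions for illustration, not the authors' actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StatAttentionGCNLayer(nn.Module):
    """Hypothetical GCN layer with statistics-based feature reweighting,
    loosely in the spirit of GSSA's channel-wise attention. The scoring
    rule below (softmax over per-channel means) is an assumption; the
    paper's CSA/MSA formulas are not given in the abstract."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def channel_attention(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed stand-in for CSA: score each feature channel by its
        # mean activation over all nodes, then normalize with softmax.
        # This uses only statistics of the input, no extra trainable
        # parameters, matching the abstract's "simple calculations
        # with statistics rather than heavy training".
        channel_mean = x.mean(dim=0)            # shape: (in_dim,)
        return F.softmax(channel_mean, dim=0)   # shape: (in_dim,)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x:        (num_nodes, in_dim) node feature matrix
        # adj_norm: (num_nodes, num_nodes) normalized adjacency (dense here)
        w = self.channel_attention(x)           # per-channel weights
        x = x * w                               # reweight features in place
        return adj_norm @ self.linear(x)        # standard GCN propagation


# Toy usage: 4 nodes with 3-dim features; identity adjacency for brevity.
layer = StatAttentionGCNLayer(3, 8)
x = torch.randn(4, 3)
adj = torch.eye(4)
out = layer(x, adj)                             # shape: (4, 8)
```

Because the reweighting sits between the layer's input and its usual propagation, a wrapper like this could in principle be dropped in front of any GCN variant's convolution, which is how the abstract's "plug and play" claim is read here.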