Computer science
Adjacency matrix
Artificial intelligence
Text graph
Classifier (UML)
Pattern recognition (psychology)
Graph
Feature extraction
Adjacency list
Encoder
Theoretical computer science
Algorithm
Automatic summarization
Operating system
Authors
Sheping Zhai,Wenqing Zhang,Dabao Cheng,Xiaoxia Bai
Identifier
DOI:10.1145/3573942.3573963
Abstract
Extracting and representing text features is the most important part of text classification. To address the incomplete feature extraction of traditional text classification methods, a text classification model based on a graph convolutional neural network and an attention mechanism is proposed. First, the text is fed into a BERT (Bidirectional Encoder Representations from Transformers) model to obtain word vector representations; a BiGRU (Bidirectional Gated Recurrent Unit) then learns the contextual semantic information of the given text, and an attention mechanism screens out the important information, which serves as the node features. Second, the dependency syntax graph of the input text and its corresponding adjacency matrix are constructed. Third, a GCN (Graph Convolutional Network) learns from the node features and the adjacency matrix. Finally, the resulting text features are fed into a classifier for text classification. Experiments on two datasets show that the proposed model achieves good classification performance and higher accuracy than the baseline models.
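The core GCN step the abstract describes takes the attention-weighted node features together with the adjacency matrix of the dependency graph and propagates information along syntactic edges. A minimal sketch of one such propagation step, H' = ReLU(Â H W) with the standard symmetric normalization Â = D^{-1/2}(A + I)D^{-1/2}, is shown below; the toy graph, feature values, and weight matrix are illustrative assumptions, not taken from the paper, and plain Python lists are used instead of a tensor library for clarity.

```python
# Sketch of one GCN propagation step over a toy dependency graph.
# All numbers here are hypothetical; the paper's actual features come
# from BERT + BiGRU + attention, which this sketch does not reproduce.

def matmul(A, B):
    """Naive dense matrix product over nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def normalize_adjacency(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    d_inv_sqrt = [sum(row) ** -0.5 for row in a_hat]
    return [[d_inv_sqrt[i] * a_hat[i][j] * d_inv_sqrt[j] for j in range(n)]
            for i in range(n)]

def gcn_layer(adj, H, W):
    """One GCN layer: ReLU(A_norm @ H @ W)."""
    A_norm = normalize_adjacency(adj)
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]

# Toy dependency graph over 3 tokens: edges 0-1 and 1-2 (undirected).
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
H = [[1.0, 0.0],    # node features (standing in for the attention output)
     [0.0, 1.0],
     [1.0, 1.0]]
W = [[1.0], [1.0]]  # hypothetical 2 -> 1 weight projection

print(gcn_layer(adj, H, W))
```

Stacking two such layers (as GCN models commonly do) lets each token aggregate information from syntactic neighbors up to two hops away before the pooled features are passed to the classifier.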