Computer science
Text graph
Artificial intelligence
Graph
Natural language processing
Optics (focus)
Text mining
Machine learning
Theoretical computer science
Optics
Physics
Authors
Yizhao Wang, Chenxi Wang, Jun Zhan, Wenjun Ma, Yuncheng Jiang
Identifier
DOI:10.1016/j.eswa.2023.119658
Abstract
Text classification is a fundamental task in Natural Language Processing (NLP). Graph neural networks (GNNs) can better handle the large amount of information in text, so effective and fast graph models for text classification have received much attention. However, most existing methods are transductive, meaning they cannot handle documents containing new words and relations. To tackle these problems, we propose a novel method for Text Classification by Fusing Contextual Information via Graph Neural Networks (TextFCG). Concretely, we first construct a single graph over all words in each text and label the edges by fusing their various contextual relations. Our text graph captures diverse document information and enhances graph connectivity by introducing more typed edges, which improves the learning effect of the GNN. Then, based on a GNN and a gated recurrent unit (GRU), our model lets local word information interact with global text information and strengthens the sequential representation of nodes. Moreover, we focus on contextual features drawn from the text itself. Extensive experiments on several benchmark datasets and detailed analysis demonstrate the effectiveness of our proposed method on the text classification task.