Computer science
Artificial intelligence
Semantics (computer science)
Natural language processing
Graph
Dual (grammatical number)
Machine learning
Theoretical computer science
Literature
Art
Programming language
Authors
Kaifang Dong, Yifan Liu, Fuyong Xu, Peiyu Liu
Source
Journal: IEEE Intelligent Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-01
Volume/issue: 38 (4): 10-19
Citations: 3
Identifiers
DOI: 10.1109/mis.2023.3268228
Abstract
Text classification occupies a fundamental and central position in natural language processing. There are many solutions to the text classification problem, but few exploit the semantic combination of multiple perspectives to improve classification performance. This paper proposes a dual-channel attention network model called DCAT, which uses the complementarity between semantics to remedy deficits in understanding. Specifically, DCAT first captures the logical semantics of the text through transductive learning and a graph structure. Then, at the attention fusion layer (Channel), the logical semantics are used to jointly train the other semantics, incrementally correcting the predictions for unlabeled test data. Experiments show that DCAT achieves more accurate classification on a wide range of text classification datasets, which is vital for subsequent text mining tasks.
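The abstract describes fusing two semantic channels with attention. As a rough illustration of that general idea (not the paper's actual DCAT implementation, which is not given here), the sketch below attention-weights two per-document semantic vectors and sums them; all names (`graph_vec`, `seq_vec`, `score_w`) are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fuse(graph_vec, seq_vec, score_w):
    """Hypothetical two-channel attention fusion.

    Scores each channel's vector against a learned weight vector,
    converts the scores to attention weights, and returns the
    weighted sum of the two channel representations.
    """
    alpha = softmax([dot(graph_vec, score_w), dot(seq_vec, score_w)])
    return [alpha[0] * g + alpha[1] * s for g, s in zip(graph_vec, seq_vec)]

# Toy usage: two 3-dimensional channel representations for one document.
fused = fuse([1.0, 0.0, 2.0], [0.5, 1.5, -0.5], [0.2, 0.1, 0.3])
print(len(fused))  # 3
```

In the real model the attention weights would be learned jointly with the classifier; here they come from a fixed scoring vector purely to show the fusion mechanics.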