Computer science
Interpretability
Natural language processing
Word
Representation
Artificial intelligence
Construct
Document classification
Process (computing)
Information retrieval
Linguistics
Programming language
Law
Philosophy
Operating system
Politics
Political science
Authors
Roberta Akemi Sinoara,José Camacho-Collados,Rafael Geraldeli Rossi,Roberto Navigli,Solange Oliveira Rezende
Identifier
DOI:10.1016/j.knosys.2018.10.026
Abstract
Accurate semantic representation models are essential in text mining applications. For a text mining process to succeed, the adopted text representation must preserve the patterns of interest to be discovered. Although competitive results in automatic text classification can be achieved with the traditional bag-of-words model, such a representation cannot provide satisfactory classification performance in hard settings where richer text representations are required. In this paper, we present an approach to representing document collections based on embedded representations of words and word senses. We bring together the power of word sense disambiguation and the semantic richness of word and word-sense embedded vectors to construct embedded representations of document collections. Our approach results in semantically enhanced, low-dimensional representations. We overcome the lack of interpretability of embedded vectors, a drawback of this kind of representation, by using word-sense embedded vectors. Moreover, the experimental evaluation indicates that the proposed representations yield stable classifiers with strong quantitative results, especially in semantically complex classification scenarios.
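The core idea described in the abstract, building a low-dimensional document representation from word and word-sense embeddings and then training a classifier on it, can be illustrated with a minimal sketch. The toy embedding table, sense identifiers, dimensionality, corpus, and choice of classifier below are illustrative assumptions, not the paper's actual pipeline, embeddings, or datasets.

```python
# Minimal sketch (assumptions, not the authors' exact method): average pre-trained
# word / word-sense embeddings per document to obtain a low-dimensional
# representation, then fit a simple classifier on those vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMB_DIM = 4  # toy dimensionality; real word/sense embeddings are typically ~300-d

# Assumed embedding lookup: maps a word or a disambiguated sense id to a vector.
embeddings = {
    "bank":      np.array([0.9, 0.1, 0.0, 0.2]),
    "bank.n.01": np.array([0.8, 0.0, 0.1, 0.3]),  # hypothetical sense id from a WSD step
    "money":     np.array([0.7, 0.2, 0.1, 0.0]),
    "river":     np.array([0.0, 0.9, 0.3, 0.1]),
    "water":     np.array([0.1, 0.8, 0.4, 0.0]),
}

def embed_document(tokens, table, dim=EMB_DIM):
    """Average the embeddings of the tokens (words or sense ids) found in the table."""
    vecs = [table[t] for t in tokens if t in table]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# Toy corpus: token lists could be raw words or the sense ids produced by WSD.
docs = [
    ["bank", "money"],           # finance-flavoured document
    ["river", "water", "bank"],  # nature-flavoured document
    ["money", "bank.n.01"],
    ["water", "river"],
]
labels = [0, 1, 0, 1]

X = np.vstack([embed_document(d, embeddings) for d in docs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))  # sanity check on the training documents
```

Replacing the word tokens with WSD-produced sense identifiers before averaging is what lets the document vectors lean on sense-level rather than surface-level semantics, which is the intuition behind the semantically enhanced representations the abstract describes.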