Computer science
Dimensionality reduction
Spatialization
Curse of dimensionality
Benchmark (surveying)
Artificial intelligence
Topic model
Set (abstract data type)
Corpus
Machine learning
Data mining
Natural language processing
Pattern recognition (psychology)
Geodesy
Sociology
Anthropology
Programming language
Geography
Authors
Daniel Atzberger,Tim Cech,Matthias Trapp,Rico Richter,Willy Scheibel,Jürgen Döllner,Tobias Schreck
Source
Journal: IEEE Transactions on Visualization and Computer Graphics
Publisher: Institute of Electrical and Electronics Engineers
Date: 2023-01-01
Volume/Issue: 1-11
Citations: 4
Identifiers
DOI: 10.1109/tvcg.2023.3326569
Abstract
Topic models are a class of unsupervised learning algorithms for detecting the semantic structure within a text corpus. Together with a subsequent dimensionality reduction algorithm, topic models can be used for deriving spatializations for text corpora as two-dimensional scatter plots, reflecting semantic similarity between the documents and supporting corpus analysis. Although the choice of the topic model, the dimensionality reduction, and their underlying hyperparameters significantly impact the resulting layout, it is unknown which particular combinations result in high-quality layouts with respect to accuracy and perception metrics. To investigate the effectiveness of topic models and dimensionality reduction methods for the spatialization of corpora as two-dimensional scatter plots (or basis for landscape-type visualizations), we present a large-scale, benchmark-based computational evaluation. Our evaluation consists of (1) a set of corpora, (2) a set of layout algorithms that are combinations of topic models and dimensionality reductions, and (3) quality metrics for quantifying the resulting layout. The corpora are given as document-term matrices, and each document is assigned to a thematic class. The chosen metrics quantify the preservation of local and global properties and the perceptual effectiveness of the two-dimensional scatter plots. By evaluating the benchmark on a computing cluster, we derived a multivariate dataset with over 45 000 individual layouts and corresponding quality metrics. Based on the results, we propose guidelines for the effective design of text spatializations that are based on topic models and dimensionality reductions. As a main result, we show that interpretable topic models are beneficial for capturing the structure of text corpora. We furthermore recommend the use of t-SNE as a subsequent dimensionality reduction.
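The pipeline the abstract evaluates, a topic model applied to a document-term matrix followed by a dimensionality reduction that yields two-dimensional scatter-plot coordinates, can be illustrated with a minimal sketch. This is not the authors' benchmark code; the toy corpus, component counts, and perplexity value are illustrative assumptions, and scikit-learn's LDA and t-SNE stand in for the broader families of methods compared in the paper.

```python
# Minimal sketch of a topic-model + dimensionality-reduction spatialization pipeline.
# Assumptions: a tiny toy corpus, 2 topics, and t-SNE perplexity 2 (chosen only so the
# example runs); the paper benchmarks many such combinations on real corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

documents = [
    "topic models detect semantic structure in text corpora",
    "dimensionality reduction projects documents to two dimensions",
    "scatter plots support visual corpus analysis",
]

# Document-term matrix: the input representation used throughout the benchmark.
dtm = CountVectorizer().fit_transform(documents)

# Topic model: each document becomes a vector of topic proportions.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_vectors = lda.fit_transform(dtm)

# Subsequent dimensionality reduction to 2D; t-SNE is the method the paper recommends.
layout = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(topic_vectors)
print(layout.shape)  # (n_documents, 2): coordinates for the scatter-plot spatialization
```

In the paper's evaluation, layouts produced this way are scored with accuracy metrics (preservation of local and global structure) and perception metrics, using the thematic class of each document as ground truth.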