Computer science
Topic model
Inference
Stability (learning theory)
Data science
Information retrieval
Artificial intelligence
Machine learning
Authors
Yi Yang, Ramanath Subramanyam
Identifier
DOI: 10.25300/MISQ/2022/16957
Abstract
Topic models are becoming a frequently employed tool in the empirical methods repertoire of information systems and management scholars. Given textual corpora, such as consumer reviews and online discussion forums, researchers and business practitioners often use topic modeling to either explore data in an unsupervised fashion or generate variables of interest for subsequent econometric analysis. However, one important concern stems from the fact that topic models can be notorious for their instability, i.e., the generated results could be inconsistent and irreproducible at different times, even on the same dataset. Therefore, researchers might arrive at potentially unreliable results regarding the theoretical relationships that they are testing or developing. In this paper, we attempt to highlight this problem and suggest a potential approach to addressing it. First, we empirically define and evaluate the stability problem of topic models using four textual datasets. Next, to alleviate the problem and with the goal of extracting actionable insights from textual data, we propose a new method, Stable LDA, which incorporates topical word clusters into the topic model to steer the model inference toward consistent results. We show that the proposed Stable LDA approach can significantly improve model stability while maintaining or even improving the topic model quality. Further, employing two case studies related to an online knowledge community and online consumer reviews, we demonstrate that the variables generated from Stable LDA can lead to more consistent estimations in econometric analyses. We believe that our work can further enhance management scholars’ collective toolkit to analyze ever-growing textual data.
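The instability the abstract describes can be seen even on a toy corpus: two runs of the same sampler with different random seeds can assign words to topics differently. Below is a minimal sketch using a toy collapsed Gibbs sampler for LDA; the corpus, the `seed_clusters` parameter, and all names are illustrative assumptions, and the optional cluster-seeded initialization is only a crude stand-in for the idea of steering inference with word clusters, not the authors' Stable LDA method.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, vocab, n_iter=200, alpha=0.1, beta=0.01,
              seed=0, seed_clusters=None):
    """Toy collapsed Gibbs sampler for LDA (illustrative, not Stable LDA).

    seed_clusters: optional {word: topic} map used only to initialize
    assignments -- a crude approximation of steering with word clusters.
    Returns the most probable word for each topic.
    """
    rng = random.Random(seed)
    V = len(vocab)
    ndk = [[0] * n_topics for _ in docs]                # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                                 # topic totals
    z = []
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            if seed_clusters and w in seed_clusters:
                t = seed_clusters[w]                    # cluster-seeded init
            else:
                t = rng.randrange(n_topics)             # random init
            zs.append(t)
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # Collapsed Gibbs conditional: P(z=k) ∝ (n_dk+α)(n_kw+β)/(n_k+Vβ)
                weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                           for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return [max(vocab, key=lambda w: nkw[k][w]) for k in range(n_topics)]

# Hypothetical two-theme corpus: fruit vs. hardware.
docs = [["apple", "banana", "apple", "fruit"],
        ["cpu", "gpu", "cpu", "chip"],
        ["banana", "fruit", "fruit"],
        ["gpu", "chip", "chip"]]
vocab = sorted({w for d in docs for w in d})

run1 = lda_gibbs(docs, 2, vocab, seed=1)
run2 = lda_gibbs(docs, 2, vocab, seed=2)
print(run1, run2)  # topic order and top words may differ across seeds
```

In practice the two runs may return the same themes in a different topic order, or different top words altogether, which is why variables derived from raw LDA topics can shift between runs of an econometric pipeline; Stable LDA's contribution, per the abstract, is to anchor inference with topical word clusters so that repeated runs yield consistent topics.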