Keywords
Computer science
Latent Dirichlet allocation
Inference
Artificial intelligence
Bernoulli distribution
Dirichlet distribution
Machine learning
Simplicity
Prior probability
Gibbs sampling
Topic model
Word
Pattern recognition
Bayesian probability
Mathematics
Mathematical analysis
Philosophy
Aerospace engineering
Engineering
Epistemology
Boundary value problem
Geometry
Authors
Zhiwen Luo, Manar Amayri, Wentao Fan, Nizar Bouguila
Identifier
DOI:10.1007/978-3-031-36819-6_4
Abstract
We propose a novel model, selective supervised Latent Beta-Liouville (ssLBLA), that improves the performance and generative process of supervised probabilistic topic models through a more flexible prior and a simple framework. The ssLBLA model replaces the "bag-of-words" representation in topic modeling with a "bag-of-selective-words," using a Bernoulli distribution to identify the discrimination power of each word for its assigned topic. ssLBLA inherits and improves the general framework of selective supervised Latent Dirichlet Allocation (ssLDA) and can predict many types of responses. This paper presents a simple framework that couples collapsed Gibbs sampling inference with the flexible Beta-Liouville (BL) distribution prior to achieve more accurate estimations. Experimental results on single-label document classification show the merits of our new approach.
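The core idea of the "bag-of-selective-words" step can be sketched as per-word Bernoulli indicators that decide whether a token is kept as discriminative for its topic. The snippet below is a minimal illustration, not the authors' implementation: the `selection_probs` table, the default probability of 0.5 for unseen words, and the function name `select_words` are all hypothetical.

```python
import numpy as np

def select_words(doc_tokens, selection_probs, rng):
    """Keep each token with probability given by its per-word Bernoulli
    parameter (a stand-in for the word's 'discrimination power').
    Words absent from the table default to 0.5 (an assumption here)."""
    params = np.array([selection_probs.get(w, 0.5) for w in doc_tokens])
    # Draw one Bernoulli indicator per token: keep iff uniform draw < p.
    keep = rng.random(len(doc_tokens)) < params
    return [w for w, k in zip(doc_tokens, keep) if k]

# Toy example: discriminative words get high selection probability,
# stop words get low probability, so the selected bag favors topic words.
rng = np.random.default_rng(0)
probs = {"topic": 0.95, "model": 0.9, "the": 0.05, "of": 0.05}
doc = ["the", "topic", "model", "of", "topic"]
selected = select_words(doc, probs, rng)
```

In the full model these Bernoulli indicators would be sampled jointly with the topic assignments during collapsed Gibbs sampling, rather than fixed per word as in this toy version.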