Non-negative matrix factorization
Matrix decomposition
Bayesian probability
Computer science
Tensor (intrinsic definition)
Artificial intelligence
Pattern recognition (psychology)
Factorization
Probabilistic logic
Algorithm
Mathematics
Physics
Quantum mechanics
Feature vector
Pure mathematics
Authors
Jesper Løve Hinrich,Søren F. V. Nielsen,Kristoffer H. Madsen,Morten Mørup
Identifier
DOI:10.1109/mlsp.2018.8516924
Abstract
Non-negative matrix and tensor factorization (NMF/NTF) have become important tools for extracting part-based representations in data. It is, however, unclear when an NMF or NTF approach is best suited to the data, and how reliably the models predict when trained on partially observed data. We presently extend a recently proposed variational Bayesian NMF (VB-NMF) to non-negative tensor factorization (VB-NTF) for partially observed data. This admits bi- and multi-linear structure quantification considering both model prediction and evidence. We evaluate the developed VB-NTF on synthetic data and a real dataset of gene expression in the human brain, and contrast its performance to VB-NMF and conventional NMF/NTF. We find that the gene expressions are better accounted for by VB-NMF than VB-NTF, and that VB-NMF/VB-NTF handle partially observed data more robustly than conventional NMF/NTF. In particular, probabilistic modeling is beneficial when large amounts of data are missing and/or the model order is over-specified.
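The variational Bayesian models are beyond a short snippet, but the conventional-NMF baseline the abstract contrasts against can be sketched for the partially observed setting. The following is a minimal, hypothetical illustration (not the authors' implementation) of NMF with multiplicative updates where a binary mask restricts the least-squares fit to observed entries; the function name and parameters are illustrative assumptions:

```python
import numpy as np

def masked_nmf(X, mask, rank, n_iter=200, eps=1e-9, seed=0):
    """Sketch of NMF on partially observed data (illustrative, not the
    paper's VB method). Minimizes ||mask * (X - W @ H)||_F^2 with
    multiplicative updates; entries where mask == 0 are ignored."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps   # non-negative init
    H = rng.random((rank, m)) + eps
    Xm = X * mask                     # zero-out unobserved entries
    for _ in range(n_iter):
        # update W while holding H fixed (masked residual only)
        WH = (W @ H) * mask
        W *= (Xm @ H.T) / (WH @ H.T + eps)
        # update H while holding W fixed
        WH = (W @ H) * mask
        H *= (W.T @ Xm) / (W.T @ WH + eps)
    return W, H

# Usage: fit a rank-1 matrix with one entry hidden by the mask.
X = np.array([[1., 2.], [2., 4.], [3., 6.]])
mask = np.ones_like(X)
mask[0, 1] = 0.0                      # pretend this entry is unobserved
W, H = masked_nmf(X, mask, rank=1)
```

The multiplicative-update form keeps `W` and `H` non-negative by construction, which is why it is a common baseline against which probabilistic variants such as VB-NMF are compared on missing-data robustness.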