Overfitting
Multilinear map
Computer science
Kernel (algebra)
Artificial intelligence
Machine learning
Tensor (intrinsic definition)
Probabilistic logic
Discriminative model
Benchmark (surveying)
Gaussian process
Artificial neural network
Mathematical optimization
Pattern recognition (psychology)
Mathematics
Gaussian distribution
Geography
Pure mathematics
Geodesy
Physics
Combinatorics
Quantum mechanics
Authors
Conor Tillinghast, Shikai Fang, Kai Zhang, Shandian Zhe
Identifier
DOI: 10.1109/icdm50108.2020.00062
Abstract
Tensor decomposition is a fundamental framework to model and analyze multiway data, which are ubiquitous in real-world applications. A critical challenge of tensor decomposition is to capture a variety of complex relationships/interactions while avoiding overfitting the data, which are usually very sparse. Although numerous tensor decomposition methods have been proposed, they are mostly based on a multilinear form and hence are incapable of estimating more complex, nonlinear relationships. To address this challenge, we propose POND, PrObabilistic Neural-kernel tensor Decomposition, which unifies the self-adaptation of Bayesian nonparametric function learning and the expressive power of neural networks. POND uses Gaussian processes (GPs) to model the hidden relationships and can automatically detect their complexity in tensors, preventing both underfitting and overfitting. POND then incorporates convolutional neural networks to construct the GP kernel, greatly promoting the capability of estimating highly nonlinear relationships. To scale POND to large data, we use the sparse variational GP framework and the reparameterization trick to develop an efficient stochastic variational learning algorithm. On both synthetic and real-world benchmark datasets, POND often exhibits better predictive performance than state-of-the-art nonlinear tensor decomposition methods. In addition, as a Bayesian approach, POND provides the posterior distribution of the latent factors, and hence can conveniently quantify their uncertainty and the confidence levels of predictions.
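The abstract's core idea (latent factors per tensor mode, a neural network mapping factor vectors to features, and a GP kernel applied to those features) can be illustrated with a minimal sketch. This is not the paper's implementation: the tiny MLP stands in for the convolutional feature extractor, the dimensions and entry indices are invented, and exact GP regression replaces the sparse variational algorithm described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 3-mode tensor with rank-2 latent factors per mode
# (sizes and names are illustrative, not from the paper).
dims, rank = (5, 4, 3), 2
factors = [rng.normal(size=(d, rank)) for d in dims]

def entry_features(idx):
    """Concatenate the latent factor vectors of one tensor entry."""
    return np.concatenate([U[i] for U, i in zip(factors, idx)])

def neural_map(x, W1, W2):
    """Tiny MLP standing in for the paper's convolutional feature extractor."""
    return np.tanh(x @ W1) @ W2

def rbf_kernel(A, B, lengthscale=1.0):
    """RBF kernel on neural features -- the 'neural kernel' idea."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Synthetic observed entries and values
train_idx = [(0, 1, 2), (3, 0, 1), (2, 2, 0), (4, 3, 2)]
y = rng.normal(size=len(train_idx))

W1 = rng.normal(size=(3 * rank, 8))
W2 = rng.normal(size=(8, 4))
Phi = np.stack([neural_map(entry_features(i), W1, W2) for i in train_idx])

# Exact GP regression over observed entries (noise variance 1e-2);
# the paper instead uses sparse variational GPs for scalability.
K = rbf_kernel(Phi, Phi) + 1e-2 * np.eye(len(y))
alpha = np.linalg.solve(K, y)

# GP posterior mean for an unobserved entry
test_phi = neural_map(entry_features((1, 3, 1)), W1, W2)[None, :]
mean = rbf_kernel(test_phi, Phi) @ alpha
print(float(mean))
```

Learning would additionally optimize the factors and MLP weights against the GP marginal likelihood; here they are fixed random values to keep the sketch short.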