Keywords: Gaussian process, Deep belief network, Computer science, Marginal likelihood, Latent variable, Artificial intelligence, Gaussian distribution, Gradient descent, Inference, Algorithm, Deep learning, Model selection, Bayesian inference, Bayesian optimization, Data modeling, Mathematical optimization, Mathematics, Bayesian probability, Artificial neural network, Physics, Database, Quantum mechanics
Authors
Andreas Damianou, Neil D. Lawrence
Source
Venue: International Conference on Artificial Intelligence and Statistics
Date: 2013-04-29
Pages: 207-215
Citations: 358
Abstract
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
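The construction described above (a GP whose inputs are themselves the outputs of another GP) can be illustrated by sampling from a two-layer deep GP prior. The sketch below is an assumption-laden illustration, not the authors' variational inference scheme: it assumes RBF kernels, a 1-D input, and 50 sample points, all chosen for clarity.

```python
# Minimal sketch: draw one sample from a two-layer deep GP prior.
# Assumes RBF (squared-exponential) kernels; not the paper's inference method.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-6):
    # K[i, j] = variance * exp(-||x_i - x_j||^2 / (2 * lengthscale^2)),
    # with a small jitter on the diagonal for numerical stability.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(X**2, axis=1)[None, :]
                - 2.0 * X @ X.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2) \
        + jitter * np.eye(len(X))

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)[:, None]   # observed inputs

# Layer 1: hidden function h ~ GP(0, k(x, x'))
h = rng.multivariate_normal(np.zeros(50), rbf_kernel(x))

# Layer 2: output y ~ GP(0, k(h, h')) -- its inputs are the layer-1 outputs
y = rng.multivariate_normal(np.zeros(50), rbf_kernel(h[:, None]))

print(y.shape)
```

Stacking the second GP on the first yields a prior over compositions of functions; deeper hierarchies repeat the same step, which is why the abstract can treat depth (number of layers) as a model-selection question for the variational bound.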