Authors
Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Jimenez Rezende, S. M. Ali Eslami
Source
Venue: International Conference on Machine Learning
Date: 2018-07-03
Pages: 1704-1713
Citations: 153
Abstract
Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. On the other hand, Bayesian methods, such as Gaussian Processes (GPs), exploit prior knowledge to quickly infer the shape of a new function at test time. Yet GPs are computationally expensive, and it can be hard to design appropriate priors. In this paper we propose a family of neural models, Conditional Neural Processes (CNPs), that combine the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. We demonstrate the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification and image completion.
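The abstract describes CNPs as neural networks that condition on a handful of observed (x, y) pairs and are trained via gradient descent. Below is a minimal sketch of that idea for 1-D regression, assuming an MLP encoder whose per-point representations are mean-aggregated and an MLP decoder that outputs a Gaussian mean and variance; the layer sizes, the sine-curve toy task, and the use of Adam are illustrative choices, not details taken from the paper.

```python
# Minimal CNP sketch (illustrative, not the paper's exact configuration).
import torch
import torch.nn as nn

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=128, h_dim=128):
        super().__init__()
        # Encoder: maps each context pair (x_i, y_i) to a representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, r_dim),
        )
        # Decoder: maps (aggregated r, target x) to the mean and log-variance
        # of a Gaussian predictive distribution.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * y_dim),
        )

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Encode each context point, then aggregate with a mean so the model
        # is invariant to the order and number of observations.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=1, keepdim=True)
        r = r.expand(-1, x_tgt.shape[1], -1)
        mu, log_var = self.decoder(torch.cat([r, x_tgt], dim=-1)).chunk(2, dim=-1)
        return mu, log_var

# Train by maximizing Gaussian log-likelihood of targets with gradient descent.
model = CNP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    # Toy task: each batch element is a new random sine function.
    shift = torch.rand(16, 1, 1) * 6.28
    x = torch.rand(16, 20, 1) * 6.28
    y = torch.sin(x + shift)
    x_ctx, y_ctx, x_tgt, y_tgt = x[:, :5], y[:, :5], x, y  # 5 context points
    mu, log_var = model(x_ctx, y_ctx, x_tgt)
    # Negative Gaussian log-likelihood (up to an additive constant).
    nll = 0.5 * (log_var + (y_tgt - mu) ** 2 / log_var.exp()).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()
```

Because the per-point representations are averaged, the same network accepts any number of context points at test time, which is what lets a trained model make predictions from only a handful of observations as the abstract claims.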