Computer science
Associative property
Prior probability
Discriminative model
Generative model
Content-addressable memory
Algorithm
Artificial intelligence
Generative grammar
Deep belief network
Pattern recognition (psychology)
Artificial neural network
Inference
Mathematics
Bayesian probability
Pure mathematics
Authors
Geoffrey E. Hinton,Simon Osindero,Yee-Whye Teh
Identifier
DOI:10.1162/neco.2006.18.7.1527
Abstract
We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
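The greedy, layer-at-a-time learning described in the abstract can be illustrated with a small sketch. This is not the paper's full procedure (complementary priors, the undirected top-level associative memory, and the contrastive wake-sleep fine-tuning are omitted); it is a minimal numpy illustration of stacking restricted Boltzmann machines, where each layer is trained with one-step contrastive divergence (CD-1) on the hidden activities of the layer below. All function names and hyperparameters here are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Draw binary states from Bernoulli probabilities.
    return (rng.random(p.shape) < p).astype(float)

def train_rbm(data, n_hidden, epochs=50, lr=0.1):
    """Train a single RBM layer with CD-1 (a common approximation,
    used here in place of the paper's exact derivation)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)  # visible biases
    b_h = np.zeros(n_hidden)   # hidden biases
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = sample(p_h0)
        # One Gibbs step: reconstruct the visibles, then the hiddens.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # Approximate log-likelihood gradient: data term minus reconstruction term.
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

def greedy_pretrain(data, layer_sizes):
    """Stack RBMs greedily: each layer is trained on the (deterministic)
    hidden activities of the layer below it."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm(x, n_hidden)
        layers.append((W, b_v, b_h))
        x = sigmoid(x @ W + b_h)  # up-pass feeding the next layer
    return layers
```

In the paper, this greedy pass only initializes the weights; a contrastive version of the wake-sleep algorithm then fine-tunes the whole directed network, with the top two layers kept as an undirected associative memory.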