Autoencoder
Inference
Posterior probability
Computer science
Weighting
Artificial intelligence
Pattern recognition (psychology)
Generative model
Flexibility (engineering)
Generative grammar
Machine learning
Algorithm
Artificial neural network
Mathematics
Statistics
Bayesian probability
Radiology
Medicine
Authors
Yuri Burda, Roger Grosse, Ruslan Salakhutdinov
Source
Venue: International Conference on Learning Representations
Date: 2016-01-01
Citations: 454
Abstract
The variational autoencoder (VAE; Kingma & Welling, 2014) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
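The multi-sample bound described in the abstract can be sketched numerically. The following is a minimal illustration (not the authors' code) of the k-sample IWAE estimate L_k = log( (1/k) Σ_i w_i ), where each importance weight is w_i = p(x, h_i) / q(h_i | x) for samples h_i drawn from the recognition network; the function names and array shapes here are assumptions for illustration.

```python
import numpy as np

def iwae_bound(log_p_joint, log_q):
    """Monte Carlo estimate of the k-sample IWAE lower bound.

    log_p_joint: shape (k,), log p(x, h_i) for k posterior samples h_i ~ q(h|x)
    log_q:       shape (k,), log q(h_i | x) under the recognition network
    Returns log((1/k) * sum_i w_i), computed stably in log space.
    """
    log_w = log_p_joint - log_q                    # log importance weights
    m = np.max(log_w)                              # shift for numerical stability
    # log-sum-exp of the weights, then subtract log k to average them
    return m + np.log(np.sum(np.exp(log_w - m))) - np.log(len(log_w))
```

With k = 1 this reduces to the standard VAE evidence lower bound (a single log weight); by Jensen's inequality the averaged-weight estimate with k > 1 is, in expectation, at least as large, which is the "strictly tighter" bound the abstract refers to.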