Inference
Computer science
Generative grammar
Artificial intelligence
Machine learning
Flexibility (engineering)
Decoding methods
Generative model
Latent variable
Algorithm
Mathematics
Statistics
Authors
Shengjia Zhao, Jiaming Song, Stefano Ermon
Source
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Date: 2019-07-17
Volume/Issue: 33 (01): 5885-5892
Citations: 212
Identifiers
DOI: 10.1609/aaai.v33i01.33015885
Abstract
A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We again identify the cause in existing training criteria and propose a new class of objectives (Info-VAE) that mitigate these problems. We show that our model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. Through extensive qualitative and quantitative analyses, we demonstrate that our models outperform competing approaches on multiple performance metrics.
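The Info-VAE family of objectives generalizes the standard ELBO by reweighting the per-sample KL term and adding a divergence between the aggregate posterior q(z) and the prior p(z); one instance the paper considers estimates that divergence with maximum mean discrepancy (MMD). Below is a minimal PyTorch sketch of such an objective. The network sizes, kernel bandwidth, the default weights `alpha` and `lambda_`, and helper names like `info_vae_loss` are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of an InfoVAE-style objective with an MMD divergence term.
# Hyperparameters and architecture are illustrative, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel between two batches of latent codes.
    sq_dists = torch.cdist(x, y).pow(2)
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd(z_q, z_p, bandwidth=1.0):
    # Biased estimator of MMD^2 between encoder samples and prior samples.
    return (rbf_kernel(z_q, z_q, bandwidth).mean()
            + rbf_kernel(z_p, z_p, bandwidth).mean()
            - 2 * rbf_kernel(z_q, z_p, bandwidth).mean())

class InfoVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar, z

def info_vae_loss(model, x, alpha=0.0, lambda_=10.0):
    # Negative of the Info-VAE objective:
    #   recon + (1 - alpha) * KL(q(z|x) || p(z)) + (alpha + lambda - 1) * D(q(z) || p(z)),
    # with D estimated here by MMD against samples from the prior.
    x_logits, mu, logvar, z = model(x)
    recon = F.binary_cross_entropy_with_logits(x_logits, x,
                                               reduction='sum') / x.size(0)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    z_prior = torch.randn_like(z)
    return recon + (1 - alpha) * kl + (alpha + lambda_ - 1) * mmd(z, z_prior)

# Usage with a stand-in batch (real data would be e.g. binarized MNIST):
model = InfoVAE()
x = torch.rand(64, 784)
loss = info_vae_loss(model, x)
loss.backward()
```

Setting `alpha = 0, lambda_ = 1` recovers the standard ELBO, while increasing `lambda_` strengthens the aggregate-posterior matching that discourages the decoder from ignoring the latent variables.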