MNIST database
Sampling (signal processing)
Noise (video)
Computer science
Inpainting
Generative model
Manifold (fluid mechanics)
Artificial intelligence
Matching (statistics)
Gaussian distribution
Image (mathematics)
Pattern recognition (psychology)
Distribution (mathematics)
Generative grammar
Algorithm
Mathematics
Deep learning
Statistics
Physics
Computer vision
Mathematical analysis
Mechanical engineering
Engineering
Filter (signal processing)
Quantum mechanics
Authors
Yang Song, Stefano Ermon
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Cited by: 620
Identifier
DOI: 10.48550/arxiv.1907.05600
Abstract
We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.
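As a rough illustration of the annealed Langevin dynamics described in the abstract, the sketch below assumes a trained noise-conditional score network `score_net(x, sigma)` (a hypothetical interface, not part of this record) that returns an estimate of the score of the sigma-perturbed data distribution; the step size is scaled by the squared noise level, as in the paper's sampling procedure, and the default `eps` and `n_steps` values are illustrative only.

```python
import torch

def annealed_langevin_sampling(score_net, shape, sigmas, eps=2e-5, n_steps=100):
    """Sample via annealed Langevin dynamics over decreasing noise levels.

    score_net(x, sigma) is assumed to return an estimate of the score
    grad_x log p_sigma(x) of the sigma-perturbed data distribution
    (hypothetical interface). `sigmas` must be ordered from the largest
    to the smallest noise level.
    """
    x = torch.rand(shape)                        # arbitrary starting point (e.g. uniform noise)
    for sigma in sigmas:                         # anneal: large noise first, small noise last
        alpha = eps * (sigma / sigmas[-1]) ** 2  # step size shrinks with the noise level
        for _ in range(n_steps):
            z = torch.randn_like(x)              # fresh Gaussian noise at each Langevin step
            x = x + 0.5 * alpha * score_net(x, sigma) + (alpha ** 0.5) * z
    return x
```

In the paper the noise levels form a decreasing geometric sequence, so early iterations use gradients of a heavily smoothed distribution (well defined even off the data manifold) and later iterations refine samples close to the data.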