Concepts
Discriminator, Computer science, Nash equilibrium, Maxima and minima, Stochastic gradient descent, Convergence (economics), Benchmark (surveying), Mathematical optimization, Generator (circuit theory), Scale (ratio), Artificial intelligence, Algorithm, Mathematics, Artificial neural network, Power (physics), Telecommunications, Mathematical analysis, Physics, Geodesy, Quantum mechanics, Detector, Geography, Economics, Economic growth
Authors
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter
Source
Journal: Cornell University - arXiv
Date: 2017-01-01
Volume: 30, pp. 6626-6637
Citations: 3576
Abstract
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proven. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP), outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.
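As a concrete illustration of the TTUR recipe described in the abstract, the sketch below shows one GAN training step in PyTorch with separate Adam optimizers, and therefore separate learning rates, for the discriminator and the generator. The network definitions, the non-saturating BCE loss, and the specific learning-rate values are illustrative assumptions for this sketch, not the authors' exact experimental setup; the core idea is only that the discriminator and generator are updated on different time scales.

# Minimal TTUR sketch (assumed setup: D outputs one logit per sample, G maps noise to images).
import torch
import torch.nn as nn

def ttur_training_step(G, D, real_batch, opt_G, opt_D, latent_dim=128):
    """One TTUR step: update D with its own optimizer, then update G with another."""
    bce = nn.BCEWithLogitsLoss()
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # --- Discriminator update (runs at learning rate lr_D) ---
    z = torch.randn(batch_size, latent_dim)
    fake_batch = G(z).detach()                  # do not backpropagate into G here
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # --- Generator update (runs at learning rate lr_G) ---
    z = torch.randn(batch_size, latent_dim)
    g_loss = bce(D(G(z)), real_labels)          # non-saturating generator loss
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

# The two time scales: separate Adam optimizers with different learning rates.
# The values below are assumed placeholders (discriminator faster than generator);
# the paper tunes the rates per dataset.
# opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.5, 0.9))
# opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))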
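The Fréchet Inception Distance mentioned in the abstract compares Gaussian statistics (mean and covariance) of Inception features of real and generated images: FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}). The sketch below computes this from pre-extracted feature arrays; the Inception feature-extraction step and the extra numerical safeguards of reference implementations are omitted, and the array names are assumptions.

# Minimal FID sketch, assuming real_feats and fake_feats are NumPy arrays of
# Inception pool3 activations with shape [N, 2048].
import numpy as np
from scipy import linalg

def frechet_inception_distance(real_feats, fake_feats, eps=1e-6):
    """FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, cov_r = real_feats.mean(axis=0), np.cov(real_feats, rowvar=False)
    mu_g, cov_g = fake_feats.mean(axis=0), np.cov(fake_feats, rowvar=False)

    diff = mu_r - mu_g
    # Matrix square root of the covariance product; retry with a small diagonal
    # offset if the product is near-singular and sqrtm returns non-finite values.
    covmean, _ = linalg.sqrtm(cov_r.dot(cov_g), disp=False)
    if not np.isfinite(covmean).all():
        offset = np.eye(cov_r.shape[0]) * eps
        covmean, _ = linalg.sqrtm((cov_r + offset).dot(cov_g + offset), disp=False)
    # Discard a tiny imaginary component introduced by numerical error.
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    return diff.dot(diff) + np.trace(cov_r + cov_g - 2.0 * covmean)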