Normalization (sociology)
Norm (philosophy)
Clipping (morphology)
Computer science
Generative grammar
Mathematics
Adversarial system
Artificial intelligence
Pattern recognition (psychology)
Algorithm
Political science
Anthropology
Linguistics
Philosophy
Sociology
Law
Authors
Changsheng Zhou, Jiangshe Zhang, Junmin Liu
Identifiers
DOI:10.1016/j.knosys.2018.08.004
Abstract
Wasserstein generative adversarial networks (Wasserstein GANs, WGAN) significantly improve the performance of GANs by imposing a Lipschitz constraint on the critic, which is implemented via weight clipping. In this work, we argue that weight clipping can produce a side effect called area collapse by heavily modifying the orientations of weights. To fix this issue, a novel method called Lp-WGAN is presented, in which lp-norm normalization is employed to impose the constraint. This method restricts the search space of the weights to a low-dimensional manifold and focuses the search on weight orientations. Experiments on toy datasets show that Lp-WGAN can spread probability mass and find the underlying distribution earlier than WGAN with weight clipping. Results on the LSUN bedroom and CIFAR-10 datasets show that the proposed method can stabilize training better, generate competitive images earlier, and achieve higher evaluation scores.
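The abstract contrasts the original WGAN weight-clipping constraint with the lp-norm normalization used by Lp-WGAN. The PyTorch sketch below is only an illustration of how the two constraints could be applied after each critic update, not the authors' implementation; the network architecture, the clipping threshold `c = 0.01`, the norm order `p_norm`, and the `radius` are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's code) of the two critic constraints
# discussed in the abstract. All hyperparameters below are illustrative.

def clip_weights(critic: nn.Module, c: float = 0.01) -> None:
    """Original WGAN constraint: clamp every parameter into [-c, c].
    Element-wise saturation can heavily change weight orientations
    (the 'area collapse' side effect the paper argues against)."""
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)

def lp_normalize_weights(critic: nn.Module, p_norm: float = 2.0,
                         radius: float = 1.0) -> None:
    """Lp-WGAN-style constraint as described in the abstract: rescale each
    parameter tensor so its lp-norm equals a fixed radius. Only the length
    changes, so training searches over orientations on an lp sphere
    (a low-dimensional manifold). In practice one might apply this to
    weight matrices only, not biases; this sketch keeps it simple."""
    with torch.no_grad():
        for p in critic.parameters():
            norm = p.norm(p=p_norm)
            if norm > 0:
                p.mul_(radius / norm)

# Usage inside one critic update step (placeholder data and model):
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(32, 2)   # batch of real samples (placeholder)
fake = torch.randn(32, 2)   # batch of generated samples (placeholder)

loss = critic(fake).mean() - critic(real).mean()  # WGAN critic loss
opt.zero_grad()
loss.backward()
opt.step()
lp_normalize_weights(critic)  # instead of clip_weights(critic)
```

Note that `lp_normalize_weights` only rescales each tensor and leaves its direction unchanged, matching the abstract's point about searching over weight orientations, whereas element-wise clamping saturates individual coordinates and can therefore alter the orientation of the weight vector.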