Keywords
Discriminator, Computer science, Leverage (statistics), Distillation, Exploitation, Generator (circuit theory), Artificial intelligence, Domain knowledge, Generative grammar, Machine learning, Fidelity, Telecommunications, Power (physics), Chemistry, Physics, Computer security, Organic chemistry, Quantum mechanics, Detector
Authors
Yuesong Tian, Li Shen, Xiang Tian, Zhifeng Li, Yaowu Chen
Identifier
DOI: 10.1117/1.jei.33.1.013005
Abstract
Generative adversarial networks have shown remarkable success in image synthesis, especially StyleGANs. Equipped with delicate and specific designs, StyleGANs are capable of synthesizing high-resolution and high-fidelity images. Previous work on improving StyleGANs mainly focuses on modifying their architecture or transferring knowledge from other domains; the knowledge contained in StyleGANs trained in the same domain remains unexplored. We aim to further boost the performance of StyleGANs from the perspective of knowledge distillation, i.e., improving uncompressed StyleGANs with the aid of teacher StyleGANs trained in the same domain. Motivated by the implicit distribution captured by the pretrained teacher discriminator, we propose to exploit the teacher discriminator to additionally supervise the student generator, thereby leveraging the knowledge in the teacher discriminator. With the proposed distillation scheme, our method outperforms the original StyleGANs on several large-scale datasets and achieves state-of-the-art results on AFHQv2.
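The core mechanism described here, a pretrained teacher discriminator providing an extra supervision signal for the student generator, can be sketched briefly. The following is a minimal PyTorch sketch, not the authors' implementation: the abstract does not specify the loss formulation, so this assumes StyleGAN's non-saturating logistic loss for both terms, and the names student_G, student_D, teacher_D, and the weight lambda_kd are illustrative.

```python
import torch
import torch.nn.functional as F

def generator_step(student_G, student_D, teacher_D, z, lambda_kd=1.0):
    """One student-generator update with added teacher-discriminator
    supervision (hypothetical loss form; lambda_kd is an assumed weight)."""
    fake = student_G(z)

    # Standard non-saturating adversarial loss against the student's
    # own discriminator.
    loss_adv = F.softplus(-student_D(fake)).mean()

    # Distillation term: the pretrained teacher discriminator also scores
    # the student's samples. Its parameters are frozen at setup via
    # teacher_D.requires_grad_(False), but gradients still flow through
    # it back into the student generator, transferring the implicit
    # distribution the teacher has learned.
    loss_kd = F.softplus(-teacher_D(fake)).mean()

    loss = loss_adv + lambda_kd * loss_kd
    loss.backward()
    return loss
```

The key design point in this sketch is that the teacher discriminator is frozen but kept differentiable: no gradient reaches its weights, yet its score still shapes the student generator's updates alongside the usual adversarial loss.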