Authors
Junhao Song, Yichao Zhang, Zhuming Bi, Tianyang Wang, Keyu Chen, Ming Li, Qian Niu, Junyu Liu, Benji Peng, Sen Zhang, Ming Liu, Jiawei Xu, Xiaoyong Pan, Jinlang Wang, Peiyong Feng, Yizhu Wen, Lingzhi Yan, H. Eric Tseng, Xinyuan Song, Jin-Tao Ren, Silin Chen, Yunze Wang, Wilson C. Hsieh, Bowen Jing, Junjie Yang, Jun Zhou, Z P Yao, Chia Xin Liang
Abstract
This book begins with a detailed introduction to the fundamental principles and historical development of GANs, contrasting them with traditional generative models and elucidating the core adversarial mechanism through illustrative Python examples. The text systematically addresses the mathematical and theoretical underpinnings, including probability theory, statistics, and game theory, providing a solid framework for understanding the objectives, loss functions, and optimisation challenges inherent in GAN training. Subsequent chapters review classic variants such as Conditional GANs, DCGANs, InfoGAN, and LAPGAN before progressing to advanced training methodologies such as Wasserstein GANs, GANs with gradient penalty, least squares GANs, and spectral normalisation. The book further examines architectural enhancements and task-specific adaptations of generators and discriminators, showcasing practical implementations in high-resolution image generation, artistic style transfer, video synthesis, text-to-image generation, and other multimedia applications. The concluding sections offer insights into emerging research trends, including self-attention mechanisms, transformer-based generative models, and a comparative analysis with diffusion models, charting promising directions for future development in both academic and applied settings.