Keywords
Generative grammar, Computer science, Discriminant, Minimax, Generative adversarial network, Adversarial system, Artificial intelligence, Field (mathematics), Deep learning, Machine learning, Divergence (linguistics), Data science, Theoretical computer science, Mathematics, Mathematical optimization, Philosophy, Linguistics, Pure mathematics
Authors
Tanujit Chakraborty, Ujjwal Reddy K S, Shraddha M. Naik, Madhurima Panja, Bayapureddy Manvitha
Identifier
DOI:10.1088/2632-2153/ad1f77
Abstract
Generative adversarial networks (GANs) have rapidly emerged as powerful tools for generating realistic and diverse data across various domains, including computer vision and other applied areas, since their inception in 2014. Consisting of a discriminative network and a generative network engaged in a minimax game, GANs have revolutionized the field of generative modeling. In February 2018, GAN secured the leading spot on the ‘Top Ten Global Breakthrough Technologies’ list issued by the MIT Technology Review. Over the years, numerous advancements have been proposed, leading to a rich array of GAN variants, such as conditional GAN, Wasserstein GAN, cycle-consistent GAN, and StyleGAN, among many others. This survey provides a general overview of GANs, summarizing the latent architecture, validation metrics, and application areas of the most widely recognized variants. We also delve into recent theoretical developments, exploring the profound connection between the adversarial principle underlying GANs and the Jensen–Shannon divergence, and discussing the optimality characteristics of the GAN framework. The efficiency of GAN variants and their model architectures is evaluated, along with common training obstacles and their solutions. In addition, we examine in detail the integration of GANs with newly developed deep learning frameworks such as transformers, physics-informed neural networks, large language models, and diffusion models. Finally, we highlight several open issues and outline directions for future research in this field.
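The connection between the adversarial principle and the Jensen–Shannon divergence mentioned in the abstract can be made concrete. The following is a brief sketch of the standard formulation from Goodfellow et al (2014), included here for orientation rather than quoted from the survey itself. The generator G and discriminator D play the two-player minimax game

\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right].

For a fixed generator, the optimal discriminator is

D^*_G(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},

and substituting it back into the value function gives

V(D^*_G, G) = -\log 4 + 2\,\mathrm{JSD}\!\left(p_{\mathrm{data}} \,\middle\|\, p_g\right),

which is minimized exactly when p_g = p_{\mathrm{data}}, i.e. when the generator reproduces the data distribution. This is the optimality result the survey revisits in its theoretical discussion.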