Computer science
Diffusion
Generative grammar
Generative model
Artificial intelligence
Class (philosophy)
Process (computing)
Image (mathematics)
Diffusion process
Phase transition
Machine learning
Scale (ratio)
Statistical physics
Theoretical computer science
Algorithm
Physics
Knowledge management
Diffusion of innovations
Quantum mechanics
Thermodynamics
Operating system
Authors
Antonio Sclocchi, Alessandro Favero, Matthieu Wyart
Identifier
DOI: 10.1073/pnas.2408799121
Abstract
Understanding the structure of real data is paramount in advancing modern deep-learning methodologies. Natural data such as images are believed to be composed of features organized in a hierarchical and combinatorial manner, which neural networks capture during learning. Recent advancements show that diffusion models can generate high-quality images, hinting at their ability to capture this underlying compositional structure. We study this phenomenon in a hierarchical generative model of data. We find that the backward diffusion process acting after a time t is governed by a phase transition at some threshold time, where the probability of reconstructing high-level features, like the class of an image, suddenly drops. Instead, the reconstruction of low-level features, such as specific details of an image, evolves smoothly across the whole diffusion process. This result implies that at times beyond the transition, the class has changed, but the generated sample may still be composed of low-level elements of the initial image. We validate these theoretical insights through numerical experiments on class-unconditional ImageNet diffusion models. Our analysis characterizes the relationship between time and scale in diffusion models and puts forward generative models as powerful tools to model combinatorial data properties.
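The abstract describes a forward-then-backward experiment: an image is noised up to an intermediate time t and the reverse diffusion is then run from t back to 0, so that small t preserves the class while large t (beyond the transition) changes it but may keep low-level details. The following is a minimal sketch of that procedure, not the authors' code: it uses a standard discrete variance-preserving (DDPM-style) schedule, and eps_model, the step count T = 1000, the linear beta schedule, and the inversion time t = 400 are all illustrative assumptions; in a real experiment eps_model would be a trained noise-prediction network such as a class-unconditional ImageNet diffusion model.

import numpy as np

T = 1000                                   # number of discrete diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)         # common linear variance schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)            # cumulative product \bar{alpha}_t

def eps_model(x_t, t):
    """Hypothetical placeholder: a real experiment calls a trained denoiser here."""
    return np.zeros_like(x_t)

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) under the variance-preserving forward process."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def reverse_from(x_t, t, rng):
    """Ancestral DDPM sampling from step t back down to 0."""
    x = x_t
    for s in range(t, 0, -1):
        eps_hat = eps_model(x, s)
        mean = (x - betas[s] / np.sqrt(1.0 - alpha_bars[s]) * eps_hat) / np.sqrt(alphas[s])
        x = mean + np.sqrt(betas[s]) * rng.standard_normal(x.shape) if s > 1 else mean
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 64, 64))      # stand-in for an input image
t = 400                                    # inversion time: small t keeps high-level class,
x_t = forward_noise(x0, t, rng)            # large t crosses the class transition
x_gen = reverse_from(x_t, t, rng)
print(x_gen.shape)

Sweeping t in this setup and measuring how often the class of x_gen matches that of x0, versus how many low-level elements are retained, is the kind of diagnostic the abstract refers to when relating time and scale in diffusion models.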