Authors
Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, David J. Fleet
Source
Venue: Cornell University - arXiv
Date: 2022-01-01
Citations: 257
Identifier
DOI: 10.48550/arxiv.2204.03458
Abstract
Generating temporally coherent high fidelity video is an important milestone in generative modeling research. We make progress towards this milestone by proposing a diffusion model for video generation that shows very promising initial results. Our model is a natural extension of the standard image diffusion architecture, and it enables jointly training from image and video data, which we find to reduce the variance of minibatch gradients and speed up optimization. To generate long and higher resolution videos we introduce a new conditional sampling technique for spatial and temporal video extension that performs better than previously proposed methods. We present the first results on a large text-conditioned video generation task, as well as state-of-the-art results on established benchmarks for video prediction and unconditional video generation. Supplementary material is available at https://video-diffusion.github.io/
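The abstract describes the model as a diffusion model whose sampler iteratively denoises a video tensor. As a rough illustration of that idea (not the paper's exact method, which uses a space-time factorized 3D U-Net and a reconstruction-guided conditional sampler), the sketch below runs a generic DDPM ancestral sampling loop over a video-shaped array; `eps_model` is a hypothetical placeholder for the learned noise predictor.

```python
import numpy as np

def eps_model(x_t, t):
    # Hypothetical stand-in for the learned noise-prediction network
    # (the paper trains a 3D U-Net; here we just return zeros).
    return np.zeros_like(x_t)

def ddpm_reverse_step(x_t, t, betas, rng):
    """One generic DDPM ancestral sampling step applied to a video
    tensor of shape (frames, height, width, channels)."""
    alphas = 1.0 - betas
    alpha_bar = np.prod(alphas[: t + 1])
    eps = eps_model(x_t, t)
    # Posterior mean of x_{t-1} given x_t and the predicted noise.
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    noise = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * noise

# Usage: denoise a tiny 16-frame video starting from pure noise.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)
x = rng.standard_normal((16, 64, 64, 3))
for t in reversed(range(len(betas))):
    x = ddpm_reverse_step(x, t, betas, rng)
print(x.shape)  # (16, 64, 64, 3)
```

The only video-specific choice here is the tensor layout: the same update that denoises an image denoises all frames jointly, which is why the paper can frame its model as "a natural extension of the standard image diffusion architecture".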