Keywords
Computer science; Transformer; Architecture; High fidelity; Artificial intelligence; Fidelity; Simplicity; Generative grammar; Generative model; Computer vision; Pattern recognition (psychology); Engineering; Telecommunications; Electrical engineering; Philosophy; Art; Visual arts; Epistemology; Voltage
Authors
Wilson Yan, Yunzhi Zhang, Pieter Abbeel, Aravind Srinivas
Source
Venue: Cornell University - arXiv
Date: 2021-01-01
Cited by: 126
Identifier
DOI: 10.48550/arxiv.2104.10157
Abstract
We present VideoGPT: a conceptually simple architecture for scaling likelihood-based generative modeling to natural videos. VideoGPT uses VQ-VAE that learns downsampled discrete latent representations of a raw video by employing 3D convolutions and axial self-attention. A simple GPT-like architecture is then used to autoregressively model the discrete latents using spatio-temporal position encodings. Despite the simplicity in formulation and ease of training, our architecture is able to generate samples competitive with state-of-the-art GAN models for video generation on the BAIR Robot dataset, and generate high fidelity natural videos from UCF-101 and the Tumblr GIF Dataset (TGIF). We hope our proposed architecture serves as a reproducible reference for a minimalistic implementation of transformer-based video generation models. Samples and code are available at https://wilson1yan.github.io/videogpt/index.html
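The abstract's second stage, autoregressively modeling discrete latents "using spatio-temporal position encodings", can be illustrated with a minimal sketch: the (frames, height, width) grid of VQ-VAE codes is flattened into a sequence, and each position gets an encoding built from per-axis embeddings. All sizes and the per-axis-sum construction below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical latent-grid sizes for illustration (not from the paper).
T, H, W, d = 4, 8, 8, 16  # latent frames, height, width, embedding dim
rng = np.random.default_rng(0)

# One (stand-in for learned) embedding table per axis; the spatio-temporal
# position encoding of latent code (t, h, w) is assumed here to be their sum.
emb_t = rng.normal(size=(T, d))
emb_h = rng.normal(size=(H, d))
emb_w = rng.normal(size=(W, d))

def position_encoding(t, h, w):
    """Spatio-temporal position encoding for the latent at (t, h, w)."""
    return emb_t[t] + emb_h[h] + emb_w[w]

# Flatten the (T, H, W) grid of discrete latents into the autoregressive
# ordering a GPT-like model would consume: t-major, then h, then w.
positions = [(t, h, w) for t in range(T) for h in range(H) for w in range(W)]
pos_enc = np.stack([position_encoding(*p) for p in positions])

# The token embeddings of the discrete codes would be added to pos_enc
# before being fed to the transformer; pos_enc has one row per latent.
assert pos_enc.shape == (T * H * W, d)
```

Factorizing the encoding per axis keeps the parameter count at (T + H + W) · d rather than T·H·W · d for a full per-position table.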