Computer science
Variety (cybernetics)
Frame (networking)
Artificial intelligence
Sample (material)
Prior probability
Machine learning
Simplicity (philosophy)
Computer vision
Bayesian probability
Chromatography
Telecommunications
Chemistry
Philosophy
Epistemology
Authors
Emily Denton, Rob Fergus
Source
Journal: Cornell University - arXiv
Date: 2018-01-01
Citations: 336
Identifier
DOI: 10.48550/arxiv.1802.07687
Abstract
Generating video frames that accurately predict future world states is challenging. Existing approaches either fail to capture the full distribution of outcomes, or yield blurry generations, or both. In this paper we introduce an unsupervised video generation model that learns a prior model of uncertainty in a given environment. Video frames are generated by drawing samples from this prior and combining them with a deterministic estimate of the future frame. The approach is simple and easily trained end-to-end on a variety of datasets. Sample generations are both varied and sharp, even many frames into the future, and compare favorably to those from existing approaches.
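The abstract describes generating each future frame by sampling from a learned prior over latent variables and combining that sample with a deterministic prediction from the frame history. The following is a minimal conceptual sketch of that generation loop, not the authors' code: the network shapes, module names (`LearnedPrior`, `FramePredictor`), and the linear encoder/decoder stand-ins are assumptions made for illustration; the paper itself uses convolutional encoders and LSTM components trained with a variational objective.

```python
# Hedged sketch (assumed architecture, PyTorch): sample z_t from a learned prior,
# then combine it with a deterministic estimate of the next frame encoding.
import torch
import torch.nn as nn

class LearnedPrior(nn.Module):
    """LSTM mapping the history of frame encodings to a Gaussian prior over z_t."""
    def __init__(self, h_dim=128, z_dim=10):
        super().__init__()
        self.rnn = nn.LSTMCell(h_dim, 256)
        self.to_mu = nn.Linear(256, z_dim)
        self.to_logvar = nn.Linear(256, z_dim)
        self.state = None

    def forward(self, h_prev):
        self.state = self.rnn(h_prev, self.state)
        hidden, _ = self.state
        return self.to_mu(hidden), self.to_logvar(hidden)

class FramePredictor(nn.Module):
    """Deterministic predictor combining the previous frame encoding with z_t."""
    def __init__(self, h_dim=128, z_dim=10):
        super().__init__()
        self.rnn = nn.LSTMCell(h_dim + z_dim, 256)
        self.to_h = nn.Linear(256, h_dim)
        self.state = None

    def forward(self, h_prev, z_t):
        self.state = self.rnn(torch.cat([h_prev, z_t], dim=-1), self.state)
        hidden, _ = self.state
        return self.to_h(hidden)

# Toy linear encoder/decoder stand-ins for 64x64 grayscale frames (assumption).
encoder = nn.Linear(64 * 64, 128)
decoder = nn.Sequential(nn.Linear(128, 64 * 64), nn.Sigmoid())
prior = LearnedPrior()
predictor = FramePredictor()

frames = torch.rand(1, 5, 64 * 64)  # short conditioning clip: (batch, time, pixels)
h = encoder(frames[:, 0])
generated = []
for t in range(1, 10):  # roll out several steps into the future
    mu, logvar = prior(h)
    z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # draw a sample from the prior
    h = predictor(h, z_t)            # deterministic estimate conditioned on z_t
    generated.append(decoder(h))     # decode the predicted frame
    if t < frames.size(1):           # re-encode ground truth while it is available
        h = encoder(frames[:, t])

print(len(generated), generated[0].shape)
```

At generation time no ground-truth future frames are needed: the prior supplies the stochasticity, so repeated rollouts from the same conditioning clip yield varied yet sharp continuations, which is the behaviour the abstract claims.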