Computer science
Trajectory
Autoregressive model
Diffusion
Motion (physics)
Artificial intelligence
Computer vision
Context (archaeology)
Focus (optics)
RGB color model
Optical flow
Motion estimation
Mathematics
Image (mathematics)
Paleontology
Physics
Astronomy
Biology
Optics
Econometrics
Thermodynamics
Authors
Zijun Deng,Xiangteng He,Yuxin Peng,Xingxing Zhu,Lele Cheng
Identifier
DOI:10.1145/3581783.3612405
Abstract
In this paper, we present a Motion-aware Video Diffusion Model (MV-Diffusion) for enhancing the temporal consistency of generated videos using autoregressive diffusion models. Despite the success of diffusion models in various vision generation tasks, generating high-quality and realistic videos with coherent temporal structure remains a challenging problem. Current methods have primarily focused on capturing implicit motion features within a restricted window of RGB frames, rather than explicitly modeling the motion. To address this, we focus on improving the temporal modeling ability of the current autoregressive video diffusion approach by leveraging rich temporal trajectory information in a global context and explicitly modeling local motion trends. The main contributions of this research include: (1) a Trajectory Modeling (TM) block that enhances the model's conditioning by incorporating global motion trajectory information, (2) a Motion Trend Attention (MTA) block that utilizes a cross-attention mechanism to explicitly infer motion trends from the optical flow rather than implicitly learning from RGB input. Experimental results on three video generation tasks using four datasets show the effectiveness of our proposed MV-Diffusion, outperforming existing state-of-the-art approaches. The code is available at https://github.com/PKU-ICST-MIPL/MV-Diffusion_ACMMM2023.
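The abstract describes the Motion Trend Attention (MTA) block as a cross-attention mechanism that infers motion trends from optical flow instead of learning them implicitly from RGB frames. The following is a minimal PyTorch sketch of that idea only, assuming queries come from RGB frame features and keys/values from optical-flow features; the class and argument names are hypothetical and do not come from the MV-Diffusion codebase linked above.

```python
# Hypothetical sketch of an MTA-style cross-attention block: queries from RGB
# frame features, keys/values from optical-flow features, so motion is modeled
# explicitly rather than inferred implicitly from RGB input alone.
# Names and shapes are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class FlowCrossAttention(nn.Module):
    def __init__(self, rgb_dim: int, flow_dim: int, embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Project RGB tokens to queries and flow tokens to keys/values.
        self.to_q = nn.Linear(rgb_dim, embed_dim)
        self.to_kv = nn.Linear(flow_dim, embed_dim * 2)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj_out = nn.Linear(embed_dim, rgb_dim)

    def forward(self, rgb_tokens: torch.Tensor, flow_tokens: torch.Tensor) -> torch.Tensor:
        # rgb_tokens:  (B, N_rgb, rgb_dim)   tokens from the current (noisy) frame
        # flow_tokens: (B, N_flow, flow_dim) tokens from optical flow of past frames
        q = self.to_q(rgb_tokens)
        k, v = self.to_kv(flow_tokens).chunk(2, dim=-1)
        attended, _ = self.attn(q, k, v)              # motion trend read out of the flow
        return rgb_tokens + self.proj_out(attended)   # residual fusion into the RGB path


if __name__ == "__main__":
    block = FlowCrossAttention(rgb_dim=320, flow_dim=128)
    rgb = torch.randn(2, 64, 320)    # e.g. an 8x8 latent grid, flattened
    flow = torch.randn(2, 64, 128)   # flow features on the same grid
    print(block(rgb, flow).shape)    # torch.Size([2, 64, 320])
```

How such a block is conditioned on global trajectories (the TM block) and wired into the autoregressive diffusion U-Net is specified in the paper and repository, not here.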