Authors
Ali Köksal,Kenan E. Ak,Ying Sun,Deepu Rajan,Joo‐Hwee Lim
Identifier
DOI:10.1109/tmm.2023.3262972
Abstract
Most existing studies on controllable video generation either transfer disentangled motion to an appearance without detailed control over the motion, or generate videos of simple actions, such as the movement of arbitrary objects, conditioned on a control signal from users. In this study, we introduce the Controllable Video Generation with text-based Instructions (CVGI) framework, which allows text-based control over the action performed in a video. CVGI generates videos in which hands interact with objects to perform the desired action, producing hand motions under detailed control through text-based instructions from users. By incorporating a motion estimation layer, we divide the task into two sub-tasks: (1) control signal estimation and (2) action generation. In control signal estimation, an encoder models actions as a set of simple motions by estimating low-level control signals for the text-based instructions given the initial frames. In action generation, generative adversarial networks (GANs) generate realistic hand-based action videos as a combination of hand motions conditioned on the estimated low-level control signals. Evaluations on several datasets (EPIC-Kitchens-55, BAIR robot pushing, and Atari Breakout) show the effectiveness of CVGI in generating realistic videos and in controlling actions.
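The abstract's two-sub-task split can be sketched as a minimal pipeline: a control-signal estimator maps an initial frame and a text instruction to a sequence of low-level control signals, and a generator renders one frame per signal. This is an illustrative sketch only; the class names, signal dimensions, and placeholder computations are assumptions, not the authors' implementation (which uses learned encoders and a GAN generator).

```python
# Hypothetical sketch of CVGI's two-stage pipeline, NOT the paper's code.
import numpy as np


class ControlSignalEstimator:
    """Sub-task 1: estimate low-level control signals from an initial
    frame and a text instruction (placeholder for a learned encoder)."""

    def __init__(self, signal_dim=4, num_steps=8, seed=0):
        self.signal_dim = signal_dim  # assumed signal dimensionality
        self.num_steps = num_steps    # one signal per generated frame
        self.rng = np.random.default_rng(seed)

    def estimate(self, initial_frame, instruction):
        # A real encoder would condition on both inputs; here we only
        # emit a fixed-shape stand-in: (num_steps, signal_dim).
        assert initial_frame.ndim == 3  # (H, W, C)
        return self.rng.standard_normal((self.num_steps, self.signal_dim))


class ActionGenerator:
    """Sub-task 2: render one frame per control signal, conditioned on
    the previous frame (placeholder for the GAN generator)."""

    def generate(self, initial_frame, control_signals):
        frames, prev = [], initial_frame.astype(np.float32)
        for sig in control_signals:
            # Stand-in for the generator network: perturb the previous
            # frame by a scalar summary of the control signal.
            prev = np.clip(prev + sig.mean(), 0.0, 255.0)
            frames.append(prev)
        return np.stack(frames)  # video tensor of shape (T, H, W, C)


def cvgi_pipeline(initial_frame, instruction):
    signals = ControlSignalEstimator().estimate(initial_frame, instruction)
    return ActionGenerator().generate(initial_frame, signals)


frame0 = np.zeros((64, 64, 3))
video = cvgi_pipeline(frame0, "pick up the cup")
print(video.shape)  # (8, 64, 64, 3)
```

The point of the split mirrors the abstract: the first stage turns a high-level instruction into per-step control signals, so the second stage only has to solve the simpler, signal-conditioned frame-generation problem.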