Keywords
Computer science
Motion interpolation
Optical flow
Computer vision
Interpolation (computer graphics)
Artificial intelligence
Visibility
Frame (networking)
Motion estimation
Motion compensation
Algorithm
Motion (physics)
Block-matching algorithm
Video tracking
Image (mathematics)
Video processing
Physics
Optics
Telecommunications
Authors
Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, Jan Kautz
Identifiers
DOI:10.1109/cvpr.2018.00938
Abstract
Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
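To make the pipeline in the abstract concrete, below is a minimal PyTorch-style sketch of the two steps it describes in closed form: the linear combination of the bi-directional flows into approximate intermediate flows, and the visibility-weighted warping and fusion of the two input frames. This is an illustration, not the authors' released code: the quadratic-in-t blend coefficients are one standard choice consistent with the abstract's statement that the flows are "linearly combined at each time step" (and, as far as I can tell, match the paper's own formulation); all function and variable names (approx_intermediate_flows, backward_warp, fuse, flow_01, ...) are hypothetical, and flow tensors are assumed to be (N, 2, H, W) with channel 0 holding x-displacements.

```python
import torch
import torch.nn.functional as F


def approx_intermediate_flows(flow_01, flow_10, t):
    """Linearly combine bi-directional flows F_{0->1} and F_{1->0}
    (each N, 2, H, W) into approximate flows from the unknown
    intermediate frame at time t in (0, 1).

    Coefficients are an assumption consistent with the abstract's
    "linearly combined at each time step" description."""
    flow_t0 = -(1.0 - t) * t * flow_01 + t * t * flow_10
    flow_t1 = (1.0 - t) ** 2 * flow_01 - t * (1.0 - t) * flow_10
    return flow_t0, flow_t1


def backward_warp(img, flow):
    """Backward-warp img (N, C, H, W) with flow (N, 2, H, W);
    flow channel 0 is assumed to be x-displacement, channel 1 y."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).to(img)       # (2, H, W) pixel coordinates
    coords = base.unsqueeze(0) + flow          # where each output pixel samples from
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0    # normalize to [-1, 1] for grid_sample
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)       # (N, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)


def fuse(img0, img1, flow_t0, flow_t1, vis_t0, vis_t1, t, eps=1e-8):
    """Warp both inputs toward time t and fuse them linearly.

    The soft visibility maps (N, 1, H, W) down-weight occluded pixels
    before the blend, as the abstract describes; eps avoids division
    by zero where both weights vanish."""
    w0 = (1.0 - t) * vis_t0
    w1 = t * vis_t1
    out = w0 * backward_warp(img0, flow_t0) + w1 * backward_warp(img1, flow_t1)
    return out / (w0 + w1 + eps)
```

Note how t enters only as a scalar argument, never as a learned parameter: this is the time-independence property the abstract highlights, which lets the same network produce arbitrarily many intermediate frames by sweeping t across (0, 1).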