Paper Title
Multiple Video Frame Interpolation via Enhanced Deformable Separable Convolution
Paper Authors
Paper Abstract
Generating non-existing frames from a consecutive video sequence has long been an interesting and challenging problem in the video processing field. Typical kernel-based interpolation methods predict pixels with a single convolution process that convolves source frames with spatially adaptive local kernels, which circumvents time-consuming, explicit motion estimation in the form of optical flow. However, when scene motion is larger than the pre-defined kernel size, these methods are prone to yielding implausible results. In addition, they cannot directly generate a frame at an arbitrary temporal position because the learned kernels are tied to the midpoint in time between the input frames. In this paper, we address these problems and propose a novel non-flow kernel-based approach, which we refer to as enhanced deformable separable convolution (EDSC), that estimates not only adaptive kernels but also offsets, masks, and biases, enabling the network to obtain information from a non-local neighborhood. During the learning process, different intermediate time steps can be involved as a control variable through an extension of the coord-conv trick, allowing the estimated components to vary with the input temporal information. This makes our method capable of producing multiple in-between frames. Furthermore, we investigate the relationships between our method and other typical kernel- and flow-based methods. Experimental results show that our method performs favorably against state-of-the-art methods across a broad range of datasets. The code will be publicly available at \url{https://github.com/Xianhang/EDSC-pytorch}.
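As a rough illustration of the mechanism the abstract describes, the sketch below shows how a deformable separable convolution might synthesize each output pixel: a pair of predicted 1-D kernels defines separable weights over an n×n window, learned offsets move each tap to a possibly non-local sampling position (read out with bilinear interpolation), a mask modulates each tap, and a bias is added. This is a minimal sketch under assumed names, shapes, and kernel size, not the authors' implementation (see the repository linked above).

```python
import torch
import torch.nn.functional as F

def deformable_separable_conv(frame, kv, kh, dy, dx, mask, bias, n=5):
    """Illustrative deformable separable convolution (shapes are assumptions).

    frame:  (B, C, H, W) source frame.
    kv, kh: (B, n, H, W) per-pixel vertical / horizontal 1-D kernels.
    dy, dx: (B, n*n, H, W) learned offsets for each of the n*n taps.
    mask:   (B, n*n, H, W) per-tap modulation mask.
    bias:   (B, C, H, W) additive bias.
    """
    B, C, H, W = frame.shape
    ys = torch.arange(H, device=frame.device, dtype=frame.dtype)
    xs = torch.arange(W, device=frame.device, dtype=frame.dtype)
    base_y, base_x = torch.meshgrid(ys, xs, indexing="ij")  # (H, W) each

    out = torch.zeros_like(frame)
    r = n // 2
    for i in range(n):          # vertical tap index
        for j in range(n):      # horizontal tap index
            k = i * n + j
            # Sampling position = regular grid tap + learned offset,
            # which may reach outside the local n x n window.
            sy = base_y + (i - r) + dy[:, k]          # (B, H, W)
            sx = base_x + (j - r) + dx[:, k]
            # Normalize pixel coordinates to [-1, 1] for grid_sample.
            gy = 2.0 * sy / (H - 1) - 1.0
            gx = 2.0 * sx / (W - 1) - 1.0
            grid = torch.stack((gx, gy), dim=-1)       # (B, H, W, 2)
            sampled = F.grid_sample(frame, grid, mode="bilinear",
                                    padding_mode="border",
                                    align_corners=True)
            # Separable weight for this tap, modulated by the mask.
            w = (kv[:, i] * kh[:, j] * mask[:, k]).unsqueeze(1)  # (B,1,H,W)
            out = out + w * sampled
    return out + bias

if __name__ == "__main__":
    B, C, H, W, n = 1, 3, 32, 32, 5
    frame = torch.rand(B, C, H, W)
    kv = torch.softmax(torch.rand(B, n, H, W), dim=1)
    kh = torch.softmax(torch.rand(B, n, H, W), dim=1)
    dy = torch.randn(B, n * n, H, W)
    dx = torch.randn(B, n * n, H, W)
    mask = torch.sigmoid(torch.rand(B, n * n, H, W))
    bias = torch.zeros(B, C, H, W)
    out = deformable_separable_conv(frame, kv, kh, dy, dx, mask, bias, n)
    print(out.shape)  # torch.Size([1, 3, 32, 32])
```

In a real network the kernels, offsets, masks, and biases would be predicted per pixel by a learned estimator; here they are random tensors purely to exercise the sampling logic.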
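Likewise, a minimal sketch of the coord-conv-style time conditioning the abstract mentions: the intermediate time step t is appended to the feature maps as a constant channel, so downstream layers that predict the kernels, offsets, masks, and biases can vary their output with t, enabling interpolation at arbitrary temporal positions. The function name and the constant-channel formulation are assumptions about one plausible realization, not the paper's exact design.

```python
import torch

def add_time_channel(features, t):
    """Coord-conv-style conditioning (illustrative assumption):
    append the intermediate time step t, a scalar in (0, 1), as a
    constant channel so later layers become time-dependent."""
    B, _, H, W = features.shape
    t_map = torch.full((B, 1, H, W), float(t),
                       device=features.device, dtype=features.dtype)
    return torch.cat((features, t_map), dim=1)

# Example: conditioning the same features on two different time steps
# would let the estimator produce two different in-between frames.
feats = torch.rand(1, 64, 32, 32)
feats_quarter = add_time_channel(feats, 0.25)   # (1, 65, 32, 32)
feats_half = add_time_channel(feats, 0.5)       # (1, 65, 32, 32)
```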