Paper Title
Error Compensation Framework for Flow-Guided Video Inpainting
Paper Authors
Abstract
The key to video inpainting is to use correlated information from as many reference frames as possible. Existing flow-based propagation methods split the video synthesis process into multiple steps: flow completion -> pixel propagation -> synthesis. However, they suffer from a significant drawback: errors introduced at each step accumulate and are amplified in the next. To address this, we propose an Error Compensation Framework for Flow-guided Video Inpainting (ECFVI), which takes advantage of the flow-based approach while offsetting its weaknesses. We do so with a newly designed flow completion module and an error compensation network that exploits an error guidance map. Our approach greatly improves the temporal consistency and visual quality of the completed videos. Experimental results show that our method outperforms state-of-the-art methods while running about 6x faster. In addition, we present a new benchmark dataset for evaluation that addresses the weaknesses of existing test datasets.
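To make the pipeline concrete, the sketch below illustrates the pixel-propagation step the abstract refers to: missing pixels in a target frame are filled by fetching colors from a reference frame at positions displaced by the completed optical flow. This is a minimal illustrative assumption, not the paper's implementation; the function name, nearest-neighbor sampling, and single-reference setup are simplifications (real systems warp bidirectionally across many frames with sub-pixel interpolation and validity checks).

```python
import numpy as np

def propagate_pixels(target, hole_mask, reference, flow):
    """Fill hole pixels in `target` with colors sampled from `reference`.

    target:    (H, W, 3) float array, frame with missing regions
    hole_mask: (H, W) bool array, True where pixels are missing
    reference: (H, W, 3) float array, neighboring frame used as source
    flow:      (H, W, 2) float array, completed flow from target to
               reference, stored as (dx, dy) per pixel
    """
    H, W = hole_mask.shape
    out = target.copy()
    ys, xs = np.nonzero(hole_mask)
    # Nearest-neighbor sampling at flow-displaced coordinates,
    # clipped to the image bounds.
    rx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, W - 1)
    ry = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, H - 1)
    out[ys, xs] = reference[ry, rx]
    return out
```

Any pixel whose flow-displaced position is itself invalid would, in a full system, be deferred to the synthesis stage; errors in the completed flow directly corrupt the copied colors, which is exactly the accumulation problem the error compensation network is meant to counteract.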