Paper Title

One-Trimap Video Matting

Authors

Hongje Seong, Seoung Wug Oh, Brian Price, Euntai Kim, Joon-Young Lee

Abstract

Recent studies made great progress in video matting by extending the success of trimap-based image matting to the video domain. In this paper, we push this task toward a more practical setting and propose the One-Trimap Video Matting network (OTVM), which performs video matting robustly using only one user-annotated trimap. A key component of OTVM is the joint modeling of trimap propagation and alpha prediction. Starting from baseline trimap propagation and alpha prediction networks, OTVM combines the two networks with an alpha-trimap refinement module to facilitate information flow. We also present an end-to-end training strategy to take full advantage of the joint model. Our joint modeling greatly improves the temporal stability of trimap propagation compared to previous decoupled methods. We evaluate our model on two recent video matting benchmarks, Deep Video Matting and VideoMatting108, and outperform the state of the art by significant margins (MSE improvements of 56.4% and 56.7%, respectively). The source code and model are available online: https://github.com/Hongje/OTVM.
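To make the pipeline the abstract describes concrete, below is a minimal PyTorch sketch of the per-frame inference loop: a trimap propagation step, an alpha prediction step, and an alpha-trimap refinement step that feeds alpha information back into the trimap before the next frame. This is an illustrative assumption, not the authors' architecture: the class names (`TrimapPropagation`, `AlphaPrediction`, `AlphaTrimapRefinement`, `OTVMSketch`) are hypothetical, and each network body is reduced to a single convolution as a stand-in. See the GitHub repository above for the real implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the OTVM inference loop described in the abstract.
# All module internals are simplified stand-ins (single convolutions);
# the actual networks are described in the paper and released at
# https://github.com/Hongje/OTVM.

class TrimapPropagation(nn.Module):
    """Stand-in: given the current frame and the previous frame's trimap,
    predict the current trimap (3 classes: fg / bg / unknown)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 3, 3, kernel_size=3, padding=1)

    def forward(self, frame, prev_trimap):
        return self.net(torch.cat([frame, prev_trimap], dim=1)).softmax(dim=1)

class AlphaPrediction(nn.Module):
    """Stand-in: frame + trimap -> alpha matte in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 3, 1, kernel_size=3, padding=1)

    def forward(self, frame, trimap):
        return self.net(torch.cat([frame, trimap], dim=1)).sigmoid()

class AlphaTrimapRefinement(nn.Module):
    """Stand-in: use the predicted alpha to refine the trimap before it
    is propagated to the next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 1, 3, kernel_size=3, padding=1)

    def forward(self, trimap, alpha):
        return self.net(torch.cat([trimap, alpha], dim=1)).softmax(dim=1)

class OTVMSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.propagate = TrimapPropagation()
        self.predict_alpha = AlphaPrediction()
        self.refine = AlphaTrimapRefinement()

    def forward(self, frames, first_trimap):
        """frames: (T, 3, H, W); first_trimap: (3, H, W), one-hot for frame 0."""
        trimap = first_trimap.unsqueeze(0)
        alphas = []
        for t in range(frames.shape[0]):
            frame = frames[t:t + 1]
            if t > 0:  # only the first frame's trimap is user-annotated
                trimap = self.propagate(frame, trimap)
            alpha = self.predict_alpha(frame, trimap)
            # The refined trimap carries alpha information forward; this
            # feedback is the joint modeling that stabilizes propagation.
            trimap = self.refine(trimap, alpha)
            alphas.append(alpha)
        return torch.cat(alphas, dim=0)  # (T, 1, H, W)

if __name__ == "__main__":
    model = OTVMSketch()
    frames = torch.rand(5, 3, 64, 64)         # 5-frame clip
    first_trimap = torch.zeros(3, 64, 64)
    first_trimap[2] = 1.0                     # all "unknown" for the demo
    print(model(frames, first_trimap).shape)  # torch.Size([5, 1, 64, 64])
```

The refinement step is the point of contrast with prior decoupled methods, which propagate trimaps without any feedback from the predicted alpha; here the two tasks share information at every frame, which is what the abstract credits for the improved temporal stability.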
