Paper Title
Video Demoireing with Relation-Based Temporal Consistency
Authors
Abstract
Moire patterns, appearing as color distortions, severely degrade image and video quality when a screen is filmed with a digital camera. Considering the increasing demand for capturing videos, we study how to remove such undesirable moire patterns in videos, namely video demoireing. To this end, we introduce the first hand-held video demoireing dataset, collected with a dedicated pipeline that ensures spatial and temporal alignment of the captured data. Further, we develop a baseline video demoireing model with implicit feature-space alignment and selective feature aggregation, which leverages complementary information from nearby frames to improve frame-level demoireing. More importantly, we propose a relation-based temporal consistency loss that encourages the model to learn temporal consistency priors directly from ground-truth reference videos, which helps produce temporally consistent predictions while effectively maintaining frame-level quality. Extensive experiments demonstrate the superiority of our model. Code is available at \url{https://daipengwa.github.io/VDmoire_ProjectPage/}.
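The abstract describes the relation-based temporal consistency loss only at a high level. A minimal sketch of one plausible formulation, assuming the "relation" is the difference between adjacent frames and that matching predicted-frame relations to ground-truth relations encourages temporal consistency (the function name and shapes here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def relation_consistency_loss(pred, gt):
    """Illustrative sketch of a relation-based temporal consistency loss.

    pred, gt: float arrays of shape (T, H, W, C) holding T video frames.
    The 'relation' between adjacent frames is taken as their difference;
    the loss penalizes mismatch between predicted and ground-truth relations.
    """
    pred_rel = pred[1:] - pred[:-1]  # relations between adjacent predicted frames
    gt_rel = gt[1:] - gt[:-1]        # relations between adjacent ground-truth frames
    return float(np.mean(np.abs(pred_rel - gt_rel)))

# A prediction that differs from the ground truth only by a constant offset
# still has identical frame-to-frame relations, so this loss is zero --
# it constrains temporal behavior rather than per-frame values.
gt = np.arange(16, dtype=np.float64).reshape(2, 2, 2, 2)
pred = gt + 0.5
print(relation_consistency_loss(pred, gt))  # → 0.0
```

In practice such a relation term would be combined with a per-frame reconstruction loss, since the relation term alone is invariant to constant shifts.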