Paper Title
Adaptive Compact Attention For Few-shot Video-to-video Translation
Authors
Abstract
This paper proposes an adaptive compact attention model for few-shot video-to-video translation. Existing works in this domain use only features from pixel-wise attention, without considering the correlations among multiple reference images, which leads to heavy computation but limited performance. Therefore, we introduce a novel adaptive compact attention mechanism to efficiently extract contextual features jointly from multiple reference images, whose encoded view-dependent and motion-dependent information can significantly benefit the synthesis of realistic videos. Our core idea is to extract compact basis sets from all the reference images as higher-level representations. To further improve reliability, in the inference phase we also propose a novel method based on the Delaunay triangulation algorithm to automatically select the most informative references according to the input label. We extensively evaluate our method on a large-scale talking-head video dataset and a human dancing dataset; the experimental results show that our method produces photorealistic and temporally consistent videos, with considerable improvements over state-of-the-art methods.
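The abstract's core idea, attending to a small basis set distilled from all reference features rather than to every reference pixel, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the basis count `num_bases`, the random initialization, and the EM-style soft-assignment refinement are all assumptions chosen to make the computational saving concrete.

```python
import torch
import torch.nn.functional as F

def compact_attention(query, ref_feats, num_bases=64, em_iters=3):
    """Sketch of attention over a compact basis set distilled from multiple
    reference feature maps (an illustrative stand-in for the paper's
    adaptive compact attention; the exact basis extraction may differ).

    query:     (B, C, H, W)    features of the current frame / input label
    ref_feats: (B, K, C, H, W) features of K reference images
    """
    B, K, C, H, W = ref_feats.shape
    # Flatten all reference pixels into one token set: (B, N, C), N = K*H*W.
    tokens = ref_feats.permute(0, 1, 3, 4, 2).reshape(B, K * H * W, C)

    # Initialize bases by sampling tokens, then refine them with a few
    # soft-assignment (EM-like) steps so they summarize all references.
    idx = torch.randint(0, K * H * W, (num_bases,), device=tokens.device)
    bases = tokens[:, idx, :]                                    # (B, M, C)
    for _ in range(em_iters):
        attn = F.softmax(tokens @ bases.transpose(1, 2), dim=1)  # (B, N, M)
        bases = attn.transpose(1, 2) @ tokens                    # (B, M, C)
        bases = F.normalize(bases, dim=-1)

    # The query attends only to the M bases: cost O(H*W * M) instead of the
    # O(H*W * K*H*W) of pixel-wise attention over all references.
    q = query.flatten(2).transpose(1, 2)                         # (B, HW, C)
    weights = F.softmax(q @ bases.transpose(1, 2) / C ** 0.5, dim=-1)
    out = weights @ bases                                        # (B, HW, C)
    return out.transpose(1, 2).reshape(B, C, H, W)
```

The saving comes from the query never touching the full K*H*W reference tokens during attention; only the cheap basis-refinement loop does, and it scales linearly in the number of references.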
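The Delaunay-based reference selection can likewise be sketched with SciPy. A loud caveat: the abstract does not say which embedding space is triangulated, so the 2-D label embeddings below (e.g., a projected pose or head orientation) and the nearest-neighbour fallback are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import Delaunay

def select_references(ref_embeddings, query_embedding):
    """Sketch of Delaunay-triangulation-based reference selection.

    ref_embeddings:  (N, 2) array, one assumed 2-D embedding per reference
    query_embedding: (2,)   embedding of the input label
    Returns indices of the references at the vertices of the triangle
    enclosing the query, i.e. its closest surrounding neighbours.
    """
    tri = Delaunay(ref_embeddings)
    simplex = tri.find_simplex(query_embedding)
    if simplex == -1:
        # Query lies outside the convex hull of the references: fall back
        # to the three nearest embeddings (an assumed fallback, not one
        # stated in the abstract).
        dists = np.linalg.norm(ref_embeddings - query_embedding, axis=1)
        return np.argsort(dists)[:3]
    return tri.simplices[simplex]  # indices of the 3 enclosing references
```

Picking the enclosing triangle, rather than a fixed number of nearest neighbours, guarantees the selected references surround the input label in embedding space, which is the intuition behind using a triangulation for selection.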