Paper Title
Audio-Visual Face Reenactment
Paper Authors
Paper Abstract
This work proposes a novel method to generate realistic talking-head videos using audio and visual streams. We animate a source image by transferring head motion from a driving video using a dense motion field generated from learnable keypoints. We improve the quality of lip sync by using audio as an additional input, helping the network attend to the mouth region. We incorporate additional priors from face segmentation and face meshes to improve the structure of the reconstructed faces. Finally, we improve the visual quality of the generations by incorporating a carefully designed identity-aware generator module. The identity-aware generator takes the source image and the warped motion features as input to generate a high-quality output with fine-grained details. Our method produces state-of-the-art results and generalizes well to unseen faces, languages, and voices. We comprehensively evaluate our approach using multiple metrics and outperform current techniques both qualitatively and quantitatively. Our work opens up several applications, including enabling low-bandwidth video calls. We release a demo video and additional information at http://cvit.iiit.ac.in/research/projects/cvit-projects/avfr.
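
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the data flow outlined in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the module names (KeypointMotionEstimator, IdentityAwareGenerator), the layer sizes, and the 16-dimensional audio embedding are all hypothetical placeholders. It shows only the high-level flow: a dense motion field predicted from the driving frame warps the source image, an audio feature is fused in to guide lip sync, and an identity-aware generator decodes the output frame.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointMotionEstimator(nn.Module):
    """Hypothetical stand-in: predicts a dense motion field (per-pixel
    flow offsets) from the driving frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2 channels: (dx, dy) offsets
        )

    def forward(self, driving):
        return self.net(driving).permute(0, 2, 3, 1)  # B x H x W x 2

class IdentityAwareGenerator(nn.Module):
    """Hypothetical stand-in: decodes the warped features, conditioned on
    the source image (for identity) and an audio embedding (for lip sync)."""
    def __init__(self, audio_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + audio_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source, warped, audio_feat):
        # Broadcast the audio embedding spatially, then fuse all inputs.
        b, _, h, w = source.shape
        audio_map = audio_feat[:, :, None, None].expand(b, -1, h, w)
        return self.net(torch.cat([source, warped, audio_map], dim=1))

def reenact(source, driving, audio_feat, motion_net, generator):
    """One forward pass: warp the source by the predicted motion, decode."""
    b = source.shape[0]
    # Identity sampling grid in [-1, 1], plus the predicted flow offsets.
    base = F.affine_grid(torch.eye(2, 3).unsqueeze(0).expand(b, -1, -1),
                         source.shape, align_corners=False)
    flow = motion_net(driving)
    warped = F.grid_sample(source, base + flow, align_corners=False)
    return generator(source, warped, audio_feat)

if __name__ == "__main__":
    src = torch.rand(1, 3, 64, 64)   # source identity frame
    drv = torch.rand(1, 3, 64, 64)   # driving frame (pose/expression)
    aud = torch.rand(1, 16)          # audio embedding for lip sync
    out = reenact(src, drv, aud, KeypointMotionEstimator(),
                  IdentityAwareGenerator())
    print(out.shape)                 # torch.Size([1, 3, 64, 64])

Note that the paper derives the dense motion field from learnable keypoints and feeds warped motion features (rather than a directly warped image) to the identity-aware generator; the sketch collapses those steps for brevity.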