Paper Title

Head2Head++: Deep Facial Attributes Re-Targeting

Paper Authors

Michail Christos Doukas, Mohammad Rami Koujan, Viktoriia Sharmanska, Anastasios Roussos

Paper Abstract

Facial video re-targeting is a challenging problem that aims to modify the facial attributes of a target subject in a seamless manner, driven by a monocular sequence. We leverage the 3D geometry of faces and Generative Adversarial Networks (GANs) to design a novel deep learning architecture for the task of facial and head reenactment. Our method differs from purely 3D model-based approaches and from recent image-based methods that use Deep Convolutional Neural Networks (DCNNs) to generate individual frames. We capture the complex non-rigid facial motion from the driving monocular performances and synthesise temporally consistent videos, with the aid of a sequential Generator and an ad-hoc Dynamics Discriminator network. We conduct a comprehensive set of quantitative and qualitative tests and demonstrate experimentally that our proposed method can successfully transfer facial expressions, head pose and eye gaze from a source video to a target subject in a photo-realistic and faithful fashion, better than other state-of-the-art methods. Most importantly, our system performs end-to-end reenactment at near real-time speed (18 fps).
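The key idea behind the Dynamics Discriminator is that it scores short clips of consecutive frames rather than single images, so the generator is penalised for temporal flicker, not just per-frame artifacts. The sketch below illustrates only the clip-construction step; the function name, clip length, and array shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_clips(frames: np.ndarray, clip_len: int) -> np.ndarray:
    """Stack consecutive frames into overlapping clips of length clip_len.

    frames: video of shape (N, H, W, C).
    Returns clips of shape (N - clip_len + 1, clip_len, H, W, C).
    A dynamics discriminator would classify each such clip as real or
    synthesised, pushing the generator toward temporally consistent motion.
    """
    n = frames.shape[0]
    if n < clip_len:
        raise ValueError("video is shorter than the clip length")
    return np.stack([frames[i:i + clip_len] for i in range(n - clip_len + 1)])

# Toy example: a 10-frame video of 4x4 RGB frames (hypothetical sizes).
video = np.random.rand(10, 4, 4, 3)
clips = make_clips(video, clip_len=3)
print(clips.shape)  # (8, 3, 4, 4, 3)
```

Each clip would then be fed to the discriminator alongside real clips from the target footage, following the standard adversarial training setup described in the abstract.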
