Paper Title

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images

Paper Authors

Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

Paper Abstract

StyleGAN generates photorealistic portrait images of faces with eyes, teeth, hair and context (neck, shoulders, background), but lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination. Three-dimensional morphable face models (3DMMs), on the other hand, offer control over the semantic parameters, but lack photorealism when rendered and only model the face interior, not other parts of a portrait image (hair, mouth interior, background). We present the first method to provide a face rig-like control over a pretrained and fixed StyleGAN via a 3DMM. A new rigging network, RigNet, is trained between the 3DMM's semantic parameters and StyleGAN's input. The network is trained in a self-supervised manner, without the need for manual annotations. At test time, our method generates portrait images with the photorealism of StyleGAN and provides explicit control over the 3D semantic parameters of the face.
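
As a rough illustration of the idea in the abstract, the sketch below shows how a rigging network might map a StyleGAN latent code together with a vector of 3DMM semantic parameters to an edited latent code that is then fed to the fixed StyleGAN generator. The class name RigNetSketch, the additive-offset formulation, and all layer and parameter dimensions are illustrative assumptions made for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RigNetSketch(nn.Module):
    """Hypothetical rigging-network sketch: combines a StyleGAN latent code w
    with 3DMM semantic parameters (e.g. pose, expression, illumination) and
    predicts an edited latent code. Sizes and structure are illustrative only."""

    def __init__(self, latent_dim=512, param_dim=16, hidden_dim=512):
        super().__init__()
        # Encode the input latent code into an intermediate feature.
        self.encoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ELU(),
        )
        # Decode the feature together with the 3DMM parameters into an
        # additive offset on the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim + param_dim, hidden_dim),
            nn.ELU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, w, theta):
        # w: (batch, latent_dim) StyleGAN latent; theta: (batch, param_dim) 3DMM params.
        features = self.encoder(w)
        delta_w = self.decoder(torch.cat([features, theta], dim=-1))
        # The edited latent would be decoded by the fixed, pretrained StyleGAN.
        return w + delta_w


# Usage: edit a batch of latent codes with target semantic parameters.
rignet = RigNetSketch()
w = torch.randn(4, 512)      # latents as produced by StyleGAN's mapping network
theta = torch.randn(4, 16)   # target 3DMM semantic parameters (illustrative size)
w_edited = rignet(w, theta)  # (4, 512), ready for the StyleGAN synthesis network
```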
