Title
OSTeC: One-Shot Texture Completion
Authors
Abstract
The last few years have witnessed the great success of non-linear generative models in synthesizing high-quality photorealistic face images. Many recent approaches to 3D facial texture reconstruction and pose manipulation from a single image still rely on large and clean face datasets to train image-to-image Generative Adversarial Networks (GANs). Yet collecting such a large-scale, high-resolution 3D texture dataset remains very costly, and it is difficult to maintain age/ethnicity balance. Moreover, regression-based approaches suffer from poor generalization to in-the-wild conditions and cannot be fine-tuned to a target image. In this work, we propose an unsupervised approach for one-shot 3D facial texture completion that does not require large-scale texture datasets, but instead harnesses the knowledge stored in 2D face generators. The proposed approach rotates the input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator, based on the visible parts. Finally, we stitch the most visible textures at the different angles in the UV image plane. Further, we frontalize the target image by projecting the completed texture into the generator. Qualitative and quantitative experiments demonstrate that the completed UV textures and frontalized images are of high quality, resemble the original identity, can be used to train a texture GAN model for 3DMM fitting, and improve pose-invariant face recognition.
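The stitching step described above — combining the most visible texture at each UV texel across the rendered angles — can be sketched as a visibility-weighted blend. The following is a minimal NumPy illustration with hypothetical array shapes, not the authors' implementation; in OSTeC the per-view textures come from rendering GAN-reconstructed rotated images into UV space.

```python
import numpy as np

def stitch_uv_textures(textures, visibilities, eps=1e-8):
    """Blend per-view UV-space textures into a single completed texture.

    Each texel is a convex combination of the views, weighted by how
    visible that texel was in the corresponding rendered angle.

    textures:     list of (H, W, 3) arrays, one UV texture per view
    visibilities: list of (H, W) arrays in [0, 1]; 1 = fully visible
    """
    tex = np.stack(textures)                  # (V, H, W, 3)
    vis = np.stack(visibilities)[..., None]   # (V, H, W, 1)
    # Normalize visibility weights per texel across views.
    weights = vis / (vis.sum(axis=0, keepdims=True) + eps)
    return (weights * tex).sum(axis=0)        # (H, W, 3)

# Toy example: two views, each covering one half of a 2x2 UV map.
left  = np.full((2, 2, 3), 0.2)
right = np.full((2, 2, 3), 0.8)
vis_l = np.array([[1.0, 0.0], [1.0, 0.0]])
vis_r = np.array([[0.0, 1.0], [0.0, 1.0]])
full = stitch_uv_textures([left, right], [vis_l, vis_r])
# Left column comes from the left view, right column from the right view.
```

In practice a soft visibility map (e.g. derived from the angle between the surface normal and the camera) gives seamless transitions between views, whereas a hard argmax over views would leave visible seams.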