Paper Title
Copy and Paste GAN: Face Hallucination from Shaded Thumbnails
Paper Authors
Paper Abstract
Existing face hallucination methods based on convolutional neural networks (CNN) have achieved impressive performance on low-resolution (LR) faces in a normal illumination condition. However, their performance degrades dramatically when LR faces are captured in low or non-uniform illumination conditions. This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images while compensating for low and non-uniform illumination. To this end, we develop two key components in our CPGAN: internal and external Copy and Paste nets (CPnets). Specifically, our internal CPnet exploits facial information residing in the input image to enhance facial details, while our external CPnet leverages an external HR face for illumination compensation. A new illumination compensation loss is thus developed to capture illumination from the external guided face image effectively. Furthermore, our method offsets illumination and upsamples facial details alternately in a coarse-to-fine fashion, thus alleviating the correspondence ambiguity between LR inputs and external HR inputs. Extensive experiments demonstrate that our method manifests authentic HR face images in a uniform illumination condition and outperforms state-of-the-art methods qualitatively and quantitatively.
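To make the alternating coarse-to-fine scheme described in the abstract concrete, below is a minimal PyTorch-style sketch: an internal module refines details from the input itself, an external module fuses features from an HR guide face for illumination compensation, and a 2x upsampler follows each stage. All class names (InternalCPnet, ExternalCPnet, CPGANGenerator), layer choices, and hyperparameters are illustrative assumptions under this reading of the abstract, not the authors' actual architecture or loss.

```python
# Hypothetical sketch of the coarse-to-fine CPGAN-style pipeline described in the
# abstract. Module names and layer choices are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InternalCPnet(nn.Module):
    """Enhances facial details using information within the input itself (assumed form)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement of internal features


class ExternalCPnet(nn.Module):
    """Borrows illumination cues from an external HR guide face (assumed form)."""
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x, guide_feat):
        # Match the guide features to the current spatial size, then fuse.
        guide_feat = F.interpolate(guide_feat, size=x.shape[-2:], mode="bilinear",
                                   align_corners=False)
        return self.fuse(torch.cat([x, guide_feat], dim=1))


class CPGANGenerator(nn.Module):
    """Alternates illumination compensation and 2x upsampling, coarse to fine."""
    def __init__(self, channels=64, num_stages=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.guide_encoder = nn.Conv2d(3, channels, 3, padding=1)
        self.internal = nn.ModuleList(InternalCPnet(channels) for _ in range(num_stages))
        self.external = nn.ModuleList(ExternalCPnet(channels) for _ in range(num_stages))
        self.up = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, 4 * channels, 3, padding=1),
                          nn.PixelShuffle(2))  # 2x upsampling per stage
            for _ in range(num_stages))
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr_face, hr_guide):
        x = self.head(lr_face)
        g = self.guide_encoder(hr_guide)
        for internal, external, up in zip(self.internal, self.external, self.up):
            x = internal(x)      # enhance details from the input itself
            x = external(x, g)   # compensate illumination from the external guide
            x = up(x)            # move to the next (finer) scale
        return self.tail(x)


if __name__ == "__main__":
    net = CPGANGenerator()
    lr = torch.randn(1, 3, 16, 16)        # shaded LR thumbnail
    guide = torch.randn(1, 3, 128, 128)   # external HR guide face
    print(net(lr, guide).shape)           # torch.Size([1, 3, 128, 128])
```

In this reading, the per-stage interleaving of the external fusion and the upsampler is what realizes the "offsets illumination and upsamples facial details alternately" behavior; the illumination compensation loss mentioned in the abstract would supervise this fusion but is not specified here.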