Paper Title

Blind Face Restoration via Deep Multi-scale Component Dictionaries

Paper Authors

Xiaoming Li, Chaofeng Chen, Shangchen Zhou, Xianhui Lin, Wangmeng Zuo, Lei Zhang

Paper Abstract

Recent reference-based face restoration methods have received considerable attention for their strong ability to recover high-frequency details in real low-quality images. However, most of these methods require a high-quality reference image of the same identity, which limits them to a narrow range of scenarios. To address this issue, this paper proposes a deep face dictionary network (termed DFDNet) to guide the restoration of degraded observations. First, we use K-means to generate deep dictionaries for perceptually significant face components (i.e., left/right eyes, nose, and mouth) from high-quality images. Next, given a degraded input, we match and select the most similar component features from the corresponding dictionaries and transfer the high-quality details to the input via the proposed dictionary feature transfer (DFT) block. In particular, component AdaIN is leveraged to eliminate the style diversity (e.g., illumination) between the input and dictionary features, and a confidence score is proposed to adaptively fuse the dictionary features into the input. Finally, multi-scale dictionaries are adopted in a progressive manner to enable coarse-to-fine restoration. Experiments show that our proposed method achieves plausible performance in both quantitative and qualitative evaluation and, more importantly, generates realistic and promising results on real degraded images without requiring a reference of the same identity. The source code and models are available at https://github.com/csxmli2016/DFDNet.
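
The core steps in the abstract (offline K-means component dictionaries, similarity matching, component AdaIN, and confidence-weighted fusion) can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration of the dictionary feature transfer idea, not the authors' implementation (which is available at the GitHub URL above). All names here (`adain`, `match_component`, `fuse_with_confidence`, `conf_net`) are assumptions for illustration, and the dictionary is assumed to be precomputed K-means cluster centers of high-quality component features.

```python
import torch
import torch.nn.functional as F


def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Re-normalize `content` (N, C, H, W) to the channel-wise mean/std of `style`.

    This mirrors the component AdaIN step: it removes style gaps
    (e.g., illumination) between the input and dictionary features.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean


def match_component(inp_feat: torch.Tensor, dictionary: torch.Tensor) -> torch.Tensor:
    """Select the dictionary atom most similar to an input component feature.

    inp_feat:   (1, C, H, W) feature of one face component (e.g., left eye).
    dictionary: (K, C, H, W) K-means centers built offline from
                high-quality face component features.
    """
    # Style-normalize every atom toward the input before measuring similarity.
    normed = adain(dictionary, inp_feat.expand_as(dictionary))
    # Cosine similarity between the flattened input and each atom.
    q = F.normalize(inp_feat.flatten(1), dim=1)   # (1, C*H*W)
    d = F.normalize(normed.flatten(1), dim=1)     # (K, C*H*W)
    scores = (d @ q.t()).squeeze(1)               # (K,)
    best = int(scores.argmax())
    return normed[best:best + 1]                  # (1, C, H, W)


def fuse_with_confidence(inp_feat: torch.Tensor, dict_feat: torch.Tensor,
                         conf_net: torch.nn.Module) -> torch.Tensor:
    """Adaptively blend the matched dictionary feature into the input.

    `conf_net` is a stand-in for the learned confidence branch: it maps the
    residual (dict - input) to per-position weights in [0, 1], so heavily
    degraded regions borrow more detail from the dictionary.
    """
    conf = torch.sigmoid(conf_net(dict_feat - inp_feat))  # (1, 1, H, W)
    return inp_feat + conf * (dict_feat - inp_feat)


# Example usage with random tensors (K=64 atoms, C=128 channels, 40x40 crops):
# dic = torch.randn(64, 128, 40, 40)
# x = torch.randn(1, 128, 40, 40)
# conf_net = torch.nn.Conv2d(128, 1, kernel_size=3, padding=1)
# y = fuse_with_confidence(x, match_component(x, dic), conf_net)
```

In the full method this transfer would be repeated per component and per scale, with the multi-scale dictionaries applied progressively from coarse to fine features.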
