Title
Physically-Based Face Rendering for NIR-VIS Face Recognition
Authors
Abstract
Near-infrared (NIR) to visible (VIS) face matching is challenging due to the significant domain gap as well as a lack of sufficient data for cross-modality model training. To overcome this problem, we propose a novel method for paired NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and reflectance from a large 2D facial dataset and introduce a novel method of transforming the VIS reflectance to NIR reflectance. We then use a physically-based renderer to generate a vast, high-resolution, and photorealistic dataset consisting of various poses and identities in the NIR and VIS spectra. Moreover, to facilitate identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not only reduces the modality gap between NIR and VIS images at the domain level but also encourages the network to focus on identity features instead of facial details such as poses and accessories. Extensive experiments conducted on four challenging NIR-VIS face recognition benchmarks demonstrate that the proposed method achieves performance comparable to state-of-the-art (SOTA) methods without requiring any existing NIR-VIS face recognition datasets. With slight fine-tuning on the target NIR-VIS face recognition datasets, our method can significantly surpass the SOTA performance. Code and pretrained models are released under InsightFace (https://github.com/deepinsight/insightface/tree/master/recognition).
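As a rough illustration of the idea behind an identity-based MMD loss, the sketch below (not the authors' implementation; the RBF kernel, bandwidth, and centroid averaging are assumptions) first averages each identity's features within a modality, then computes a maximum mean discrepancy between the NIR and VIS identity centroids, so the penalty acts at the domain/identity level rather than on per-image details:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def id_mmd_loss(feats_nir, feats_vis, ids_nir, ids_vis, sigma=1.0):
    """Hypothetical ID-MMD sketch: average each identity's embeddings
    within a modality, then compute the (biased) MMD^2 between the two
    sets of identity centroids."""
    ids = sorted(set(ids_nir) & set(ids_vis))
    ids_nir = np.asarray(ids_nir)
    ids_vis = np.asarray(ids_vis)
    # Per-identity centroids in each modality.
    cn = np.stack([feats_nir[ids_nir == i].mean(0) for i in ids])
    cv = np.stack([feats_vis[ids_vis == i].mean(0) for i in ids])
    # Biased MMD^2 estimate: E[k(n,n)] + E[k(v,v)] - 2 E[k(n,v)].
    return (rbf_kernel(cn, cn, sigma).mean()
            + rbf_kernel(cv, cv, sigma).mean()
            - 2 * rbf_kernel(cn, cv, sigma).mean())
```

Because the loss compares identity centroids rather than individual images, per-image variations (pose, accessories) are averaged out before the two modalities are aligned, which matches the stated goal of pushing the network toward identity features.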