Paper Title
Controllable and Guided Face Synthesis for Unconstrained Face Recognition
Paper Authors
Paper Abstract
Although significant advances have been made in face recognition (FR), FR in unconstrained environments remains challenging due to the domain gap between semi-constrained training datasets and unconstrained testing scenarios. To address this problem, we propose a controllable face synthesis model (CFSM) that can mimic the distribution of target datasets in a style latent space. CFSM learns a linear subspace with orthogonal bases in the style latent space, giving precise control over the diversity and degree of synthesis. Furthermore, the pre-trained synthesis model can be guided by the FR model, making the resulting images more beneficial for FR model training. Moreover, the target dataset distributions are characterized by the learned orthogonal bases, which can be utilized to measure the distributional similarity among face datasets. Our approach yields significant performance gains on unconstrained benchmarks such as IJB-B, IJB-C, TinyFace, and IJB-S (+5.76% Rank-1).
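To make the two key ideas concrete, the sketch below illustrates (a) moving a style code along a learned orthogonal basis, with separate knobs for diversity (the coefficients) and degree (a scalar magnitude), and (b) comparing two datasets' learned bases via principal angles between subspaces. This is a minimal, hypothetical NumPy illustration, not the authors' implementation; the random matrices stand in for the bases that CFSM would learn, and `degree`, `alpha`, and `similarity` are names introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 10  # style-code dimension and subspace rank (illustrative values)

# Orthonormal basis via QR decomposition; stands in for CFSM's learned bases.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

s = rng.standard_normal(d)      # a source style code
alpha = rng.standard_normal(k)  # coefficients: control the *diversity* of synthesis
degree = 1.5                    # scalar: controls the *degree* (magnitude) of synthesis

# Shift the style code along the target subspace: s' = s + degree * U @ alpha
s_new = s + degree * (U @ alpha)

# A second dataset's (stand-in) basis; subspace similarity can be scored by
# the singular values of U^T V, i.e. cosines of the principal angles.
V, _ = np.linalg.qr(rng.standard_normal((d, k)))
cosines = np.linalg.svd(U.T @ V, compute_uv=False)
similarity = float(cosines.mean())  # 1.0 = identical subspaces, ~0 = orthogonal
```

With `degree = 0` the style code is unchanged, so the synthesis strength interpolates smoothly between the source image and strongly target-styled variants.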