Paper Title

QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization

Paper Authors

Fadi Boutros, Naser Damer, Arjan Kuijper

Paper Abstract

Deep learning-based face recognition models follow the common trend in deep neural networks by utilizing full-precision floating-point networks with high computational costs. Deploying such networks in use-cases constrained by computational requirements is often infeasible due to the large memory required by the full-precision model. Previous compact face recognition approaches proposed to design special compact architectures and train them from scratch using real training data, which may not be available in a real-world scenario due to privacy concerns. We present in this work the QuantFace solution based on low-bit precision format model quantization. QuantFace reduces the required computational cost of existing face recognition models without the need for designing a particular architecture or accessing real training data. QuantFace introduces privacy-friendly synthetic face data to the quantization process to mitigate potential privacy concerns and issues related to the accessibility of real training data. Through extensive evaluation experiments on seven benchmarks and four network architectures, we demonstrate that QuantFace can successfully reduce the model size by up to 5x while maintaining, to a large degree, the verification performance of the full-precision model without accessing real training datasets.
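
The abstract describes the idea only at a high level: an existing full-precision face recognition network is converted to a low-bit format, and privacy-friendly synthetic face images stand in for real training data during that conversion. As a rough, generic illustration of this idea (not the authors' actual procedure), the minimal PyTorch sketch below applies standard post-training static quantization to a toy embedding backbone and calibrates it on random tensors that stand in for synthetic 112x112 face crops; `TinyEmbeddingNet`, the input size, and the calibration batch are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, prepare, convert
)

class TinyEmbeddingNet(nn.Module):
    """Toy stand-in for a full-precision face recognition backbone."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.quant = QuantStub()      # float -> int8 at the input
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embedding_dim)
        self.dequant = DeQuantStub()  # int8 -> float for the output embedding

    def forward(self, x):
        x = self.quant(x)
        x = self.features(x).flatten(1)
        return self.dequant(self.fc(x))

model = TinyEmbeddingNet().eval()

# Privacy-friendly calibration set: random tensors shaped like 112x112 face
# crops stand in here; QuantFace would use synthetically generated faces.
synthetic_faces = torch.rand(32, 3, 112, 112)

# Post-training static quantization: attach observers, run the synthetic data
# to collect activation ranges, then convert weights/activations to int8.
model.qconfig = get_default_qconfig("fbgemm")
prepared = prepare(model)
with torch.no_grad():
    prepared(synthetic_faces)
int8_model = convert(prepared)

embedding = int8_model(torch.rand(1, 3, 112, 112))
print(embedding.shape)  # torch.Size([1, 128])
```

In this sketch int8 weights give roughly a 4x storage reduction relative to float32; the paper's reported reduction of up to 5x comes from its own low-bit formats and quantization scheme, which differ from this generic calibration-only example.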
