Paper Title
Privacy Preserving Face Recognition Utilizing Differential Privacy
Paper Authors
Paper Abstract
Facial recognition technologies are implemented in many areas, including but not limited to citizen surveillance, crime control, activity monitoring, and facial expression evaluation. However, processing biometric information is a resource-intensive task that often involves third-party servers, which can be accessed by adversaries with malicious intent. Biometric information delivered to untrusted third-party servers in an uncontrolled manner can be considered a significant privacy leak (i.e., uncontrolled information release), as biometrics can be correlated with sensitive data such as healthcare or financial records. In this paper, we propose a privacy-preserving technique for "controlled information release", where we disguise an original face image and prevent leakage of the biometric features while still identifying a person. We introduce a new privacy-preserving face recognition protocol named PEEP (Privacy using EigEnface Perturbation) that utilizes local differential privacy. PEEP applies differentially private perturbation to Eigenfaces and stores only the perturbed data on the third-party servers, which run a standard Eigenface recognition algorithm. As a result, the trained model will not be vulnerable to privacy attacks such as membership inference and model memorization attacks. Our experiments show that PEEP exhibits a classification accuracy of around 70%-90% under standard privacy settings.
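To make the idea in the abstract concrete, the following is a minimal Python sketch of the kind of pipeline described: face images are projected onto Eigenfaces (principal components), and Laplace noise calibrated to a privacy budget is added to the projections on the data owner's side before anything is released. The function name perturb_eigenfaces, the epsilon and sensitivity parameters, and the use of scikit-learn's PCA are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of Eigenface perturbation with Laplace noise (local-DP style).
# NOT the authors' PEEP implementation: epsilon, the sensitivity bound, and the
# choice of scikit-learn PCA are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def perturb_eigenfaces(face_images, n_components=50, epsilon=8.0, sensitivity=1.0):
    """Project flattened face images onto Eigenfaces and add Laplace noise
    to the projections before they leave the data owner's side."""
    # Flatten images to vectors and scale pixels to [0, 1] so the assumed
    # sensitivity bound is plausible.
    X = face_images.reshape(len(face_images), -1).astype(np.float64) / 255.0

    # Eigenfaces = principal components of the face vectors.
    pca = PCA(n_components=n_components)
    projections = pca.fit_transform(X)

    # Laplace mechanism: noise scale = sensitivity / epsilon, applied to every
    # coefficient before release.
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=projections.shape)
    return projections + noise, pca

# Only the perturbed projections (plus the projection basis) would be sent to
# the third-party server, which trains/runs a standard classifier, e.g.:
#   noisy_proj, pca = perturb_eigenfaces(train_images)
#   classifier.fit(noisy_proj, labels)
```

In this sketch, smaller values of epsilon give stronger privacy (more noise) at the cost of recognition accuracy, which is consistent with the 70%-90% accuracy range reported in the abstract under standard privacy settings.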