Paper Title
Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach
Paper Authors
Paper Abstract
A critical concern in data-driven decision making is to build models whose outcomes do not discriminate against some demographic groups, including gender, ethnicity, or age. To ensure non-discrimination in learning tasks, knowledge of the sensitive attributes is essential, while, in practice, these attributes may not be available due to legal and ethical requirements. To address this challenge, this paper studies a model that protects the privacy of individuals' sensitive information while still allowing the learning of non-discriminatory predictors. The method relies on the notion of differential privacy and on Lagrangian duality to design neural networks that can accommodate fairness constraints while guaranteeing the privacy of sensitive attributes. The paper analyzes the tension between accuracy, privacy, and fairness, and an experimental evaluation illustrates the benefits of the proposed model on several prediction tasks.
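The core idea named in the abstract, enforcing a fairness constraint via Lagrangian duality, can be sketched as alternating primal updates on the model parameters with dual ascent on a fairness multiplier. The following is a minimal illustrative sketch on a toy logistic model, not the paper's actual method: all names (`fairness_gap`, `lam`, the 0.05 tolerance) are assumptions, and the differential-privacy mechanism the paper adds on top (e.g., noise on sensitive-attribute computations) is deliberately omitted.

```python
# Hypothetical sketch of a Lagrangian dual approach to fair learning:
# minimize prediction loss subject to a demographic-parity constraint,
# alternating a primal gradient step on the weights with a dual ascent
# step on the multiplier. DP noise addition is omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features x, labels y, binary sensitive attribute s.
n = 400
x = rng.normal(size=(n, 3))
s = (rng.random(n) < 0.5).astype(float)
y = (x[:, 0] + 0.8 * s + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_gap(p):
    # Demographic-parity gap: difference in mean predicted score by group.
    return p[s == 1].mean() - p[s == 0].mean()

w = np.zeros(3)
lam = 0.0            # Lagrange multiplier for the fairness constraint
lr, dual_lr = 0.1, 0.5
tol = 0.05           # illustrative allowed constraint violation

for _ in range(500):
    p = sigmoid(x @ w)
    # Primal step: gradient of cross-entropy plus lam * |gap| term.
    grad_loss = x.T @ (p - y) / n
    dp = p * (1 - p)  # sigmoid derivative
    grad_gap = (x * dp[:, None]).T @ (s / s.sum() - (1 - s) / (1 - s).sum())
    w -= lr * (grad_loss + lam * np.sign(fairness_gap(p)) * grad_gap)
    # Dual step: ascend on the constraint violation, keeping lam >= 0.
    lam = max(0.0, lam + dual_lr * (abs(fairness_gap(sigmoid(x @ w))) - tol))

final_gap = abs(fairness_gap(sigmoid(x @ w)))
```

The dual variable `lam` grows while the fairness constraint is violated, which increases the fairness term's weight in the primal objective; once the gap falls within the tolerance, `lam` stops growing. This alternation is what makes the constraint "accommodated" inside ordinary gradient-based training of a network.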