Paper Title

Enforcing Delayed-Impact Fairness Guarantees

Authors

Aline Weber, Blossom Metevier, Yuriy Brun, Philip S. Thomas, Bruno Castro da Silva

Abstract

Recent research has shown that seemingly fair machine learning models, when used to inform decisions that have an impact on people's lives or well-being (e.g., applications involving education, employment, and lending), can inadvertently increase social inequality in the long term. This is because prior fairness-aware algorithms only consider static fairness constraints, such as equal opportunity or demographic parity. However, enforcing constraints of this type may result in models that have negative long-term impact on disadvantaged individuals and communities. We introduce ELF (Enforcing Long-term Fairness), the first classification algorithm that provides high-confidence fairness guarantees in terms of long-term, or delayed, impact. We prove that the probability that ELF returns an unfair solution is less than a user-specified tolerance and that (under mild assumptions), given sufficient training data, ELF is able to find and return a fair solution if one exists. We show experimentally that our algorithm can successfully mitigate long-term unfairness.
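As a rough illustration of the kind of high-confidence guarantee stated in the abstract, the sketch below shows a generic safety test in Python: a candidate classifier is accepted only if a (1 - delta) confidence upper bound on its expected delayed-impact unfairness is at most zero. This is a minimal sketch of the general test structure, not the authors' exact ELF procedure; the function name safety_test, the per-sample estimates g_hat_samples, and the use of a Student's t bound are all illustrative assumptions.

import numpy as np
from scipy import stats

def safety_test(g_hat_samples, delta):
    """Return True iff a (1 - delta) confidence upper bound on E[g] is <= 0.

    g_hat_samples: unbiased per-sample estimates of the delayed-impact
        unfairness measure g for a candidate model (g > 0 means unfair).
    delta: user-specified tolerance on the probability of returning an
        unfair solution.
    Both the estimator and the Student's t bound are illustrative
    assumptions, not ELF's exact construction.
    """
    n = len(g_hat_samples)
    mean = np.mean(g_hat_samples)
    std = np.std(g_hat_samples, ddof=1)
    # One-sided Student's t upper confidence bound on E[g].
    upper = mean + std / np.sqrt(n) * stats.t.ppf(1 - delta, df=n - 1)
    return upper <= 0.0

In this style of algorithm, the candidate model is returned only if the test passes; otherwise the algorithm reports "no solution found", so (under the bound's assumptions) an unfair model is returned with probability at most delta.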
