Paper Title

Robust Optimization for Fairness with Noisy Protected Groups

Paper Authors

Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan

Paper Abstract

Many existing fairness criteria for machine learning involve equalizing some metric across protected groups such as race or gender. However, practitioners trying to audit or enforce such group-based criteria can easily face the problem of noisy or biased protected group information. First, we study the consequences of naively relying on noisy protected group labels: we provide an upper bound on the fairness violations on the true groups G when the fairness criteria are satisfied on noisy groups $\hat{G}$. Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups G while minimizing a training objective. We provide theoretical guarantees that one such approach converges to an optimal feasible solution. Using two case studies, we show empirically that the robust approaches achieve better true group fairness guarantees than the naive approach.
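
The following is a minimal toy sketch (not the paper's code or its formal bound) of the abstract's first point: a fairness audit computed on noisy group labels $\hat{G}$ can understate the violation on the true groups G. The group structure, the noise rate `gamma`, and the demographic-parity metric used here are assumptions made purely for illustration.

```python
# Toy illustration: auditing a demographic-parity gap on noisy group labels
# G_hat vs. the true groups G. All quantities below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True protected group G (0 or 1) and predictions whose positive rate depends on G.
G = rng.integers(0, 2, size=n)
pred = rng.random(n) < np.where(G == 1, 0.45, 0.55)  # ~0.10 positive-rate gap on G

# Noisy group labels G_hat: each true label is flipped with probability gamma.
gamma = 0.3
flip = rng.random(n) < gamma
G_hat = np.where(flip, 1 - G, G)

def parity_gap(groups, predictions):
    """Absolute difference in positive prediction rates between the two groups."""
    return abs(predictions[groups == 1].mean() - predictions[groups == 0].mean())

print(f"violation measured on noisy groups G_hat: {parity_gap(G_hat, pred):.3f}")
print(f"violation on true groups G:               {parity_gap(G, pred):.3f}")
# The gap measured on G_hat is attenuated toward zero relative to the gap on G,
# which is why a naive audit on noisy labels can look fair while the true groups
# are not; the robust approaches in the paper are designed to guard against this.
```
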
