Paper Title

Fair Ranking with Noisy Protected Attributes

Authors

Mehrotra, Anay, Vishnoi, Nisheeth K.

Abstract

The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature. Recent works, however, observe that errors in socially-salient (including protected) attributes of items can significantly undermine fairness guarantees of existing fair-ranking algorithms and raise the problem of mitigating the effect of such errors. We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed. We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes. We provide provable guarantees on the fairness and utility attainable by our framework and show that it is information-theoretically impossible to significantly beat these guarantees. Our framework works for multiple non-disjoint attributes and a general class of fairness constraints that includes proportional and equal representation. Empirically, we observe that, compared to baselines, our algorithm outputs rankings with higher fairness, and has a similar or better fairness-utility trade-off compared to baselines.
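To make the model concrete: under noisy protected attributes, an algorithm only knows each item's *probability* of belonging to a group, so representation constraints can be evaluated in expectation. The following is a minimal illustrative sketch of that idea, not the paper's algorithm; the function names, the probabilities, and the `slack` tolerance are all hypothetical.

```python
# Toy sketch (hypothetical, not the paper's method): with noisy protected
# attributes we only have group_prob[i], the probability that item i
# belongs to the protected group. A ranking prefix can then be checked
# against a representation target in expectation.

def expected_representation(ranking, group_prob, k):
    """Expected fraction of the top-k items that belong to the group."""
    top_k = ranking[:k]
    return sum(group_prob[i] for i in top_k) / k

def satisfies_proportional(ranking, group_prob, k, target, slack=0.1):
    """Proportional-representation constraint, evaluated in expectation:
    the expected top-k group share must lie within `slack` of `target`."""
    return abs(expected_representation(ranking, group_prob, k) - target) <= slack

# Example: 6 items with estimated membership probabilities.
probs = {0: 0.9, 1: 0.1, 2: 0.8, 3: 0.2, 4: 0.7, 5: 0.3}
ranking = [0, 1, 2, 3, 4, 5]
print(expected_representation(ranking, probs, 4))        # 0.5
print(satisfies_proportional(ranking, probs, 4, target=0.5))  # True
```

The point of the sketch is the modeling shift the abstract describes: fairness is imposed using probabilistic information about perturbed attributes rather than the (unobserved) true group memberships.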
