Paper Title

Fair Prediction with Endogenous Behavior

Paper Authors

Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra

Paper Abstract

There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups "fairly." However, there are several proposed notions of fairness, typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g. opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
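To make the setup in the abstract concrete, below is a minimal, hypothetical simulation sketch in Python; it is not the paper's formal model. Two groups differ only in the mean of their outside-option distribution, each agent commits a crime only if the expected payoff of crime beats that option, and society incarcerates based on a noisy signal via a threshold rule. The Gaussian distributions, the payoff parameters (`crime_gain`, `penalty`, `signal_noise`), the threshold rule, and the function `simulate_group` are all illustrative assumptions; the sketch only shows how a crime rate and group-wise type I / type II error rates arise endogenously from the chosen rule.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_group(outside_option_mean, threshold, n=100_000,
                   crime_gain=1.0, penalty=2.0, signal_noise=1.0):
    """Simulate one demographic group under a threshold incarceration rule.

    Agents draw an outside option (e.g. value of legal employment) and commit
    a crime only if the expected payoff of crime exceeds it.  Society observes
    a noisy signal of behavior and incarcerates when the signal crosses
    `threshold`.  Returns the crime rate and the type I / type II error rates.
    """
    outside = rng.normal(outside_option_mean, 1.0, n)

    # Probability a criminal is convicted under this threshold
    # (the signal is centered at 1 for criminals, at 0 for non-criminals).
    p_convict_if_crime = 1.0 - norm.cdf(threshold - 1.0, scale=signal_noise)

    # Endogenous behavior: commit a crime iff the expected payoff of crime
    # beats the agent's outside option.
    commits = (crime_gain - p_convict_if_crime * penalty) > outside

    signal = commits.astype(float) + rng.normal(0.0, signal_noise, n)
    jailed = signal > threshold

    crime_rate = commits.mean()
    type1 = jailed[~commits].mean()      # innocent but incarcerated
    type2 = (~jailed)[commits].mean()    # guilty but released
    return crime_rate, type1, type2

# Two groups that differ only in their outside options; the same threshold
# rule then produces different crime and error rates across the groups.
for name, mu in [("A", 1.5), ("B", 0.5)]:
    c, t1, t2 = simulate_group(outside_option_mean=mu, threshold=0.5)
    print(f"group {name}: crime rate={c:.3f}, type I={t1:.3f}, type II={t2:.3f}")
```

Under these assumed parameters, the group with the worse outside options commits more crime under the same threshold, which illustrates why the choice of incarceration rule, and which errors it equalizes across groups, interacts with the overall crime rate.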
