Paper title
Measuring justice in machine learning
Paper authors
Paper abstract
How can we build more just machine learning systems? To answer this question, we need to know both what justice is and how to tell whether one system is more or less just than another. That is, we need both a definition and a measure of justice. Theories of distributive justice hold that justice can be measured (in part) in terms of the fair distribution of benefits and burdens across people in society. Recently, the field known as fair machine learning has turned to John Rawls's theory of distributive justice for inspiration and operationalization. However, philosophers known as capability theorists have long argued that Rawls's theory uses the wrong measure of justice, thereby encoding biases against people with disabilities. If these theorists are right, is it possible to operationalize Rawls's theory in machine learning systems without also encoding its biases? In this paper, I draw on examples from fair machine learning to suggest that the answer to this question is no: the capability theorists' arguments against Rawls's theory carry over into machine learning systems. But capability theorists don't only argue that Rawls's theory uses the wrong measure; they also offer an alternative. Which measure of justice is right? And has fair machine learning been using the wrong one?