Paper Title
A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set
Paper Authors
Paper Abstract
Applications based on Machine Learning models have become an indispensable part of everyday life and the professional world. A critical question has recently arisen among the public: do algorithmic decisions convey any form of discrimination against specific population groups or minorities? In this paper, we show the importance of understanding how bias can be introduced into automatic decisions. We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting. We then propose to quantify the presence of bias using the standard Disparate Impact index on the real and well-known Adult income data set. Finally, we assess the performance of different approaches that aim to reduce bias in binary classification outcomes. Importantly, we show that some intuitive methods are ineffective. This sheds light on the fact that making machine learning models fair can be a particularly challenging task, especially when the training observations themselves contain bias.
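The abstract quantifies bias with the standard Disparate Impact index. As a minimal sketch (not the paper's code; function names and the toy data are illustrative), the index is the ratio of positive-prediction rates between the protected and reference groups:

```python
# Illustrative sketch of the Disparate Impact (DI) index for a
# binary classifier:
#   DI = P(Y_hat = 1 | S = 1) / P(Y_hat = 1 | S = 0),
# where S = 1 marks the protected group. A DI below 0.8 is commonly
# read as evidence of adverse impact (the "four-fifths rule").

def disparate_impact(y_pred, protected):
    """Ratio of positive-prediction rates: protected over reference group."""
    def positive_rate(group):
        members = [y for y, s in zip(y_pred, protected) if s == group]
        return sum(members) / len(members)
    return positive_rate(1) / positive_rate(0)

# Toy data: 2/10 protected vs 5/10 reference individuals receive the
# positive outcome, so DI = 0.2 / 0.5 = 0.4, well below 0.8.
y_hat = [1, 1] + [0] * 8 + [1] * 5 + [0] * 5
s = [1] * 10 + [0] * 10
print(disparate_impact(y_hat, s))  # 0.4
```

On the Adult income data set, `y_pred` would be the classifier's income predictions and `protected` a sensitive attribute such as gender; a DI close to 1 indicates statistical parity between the two groups.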