Title
Sustaining Fairness via Incremental Learning
Authors
Abstract
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
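The abstract states that FaIRL works "by controlling the rate-distortion function of the learned representations" but does not spell out the measure. As an illustration only (not the paper's confirmed implementation), one common rate-distortion quantity for a batch of representations is the coding rate, which estimates the number of bits needed to encode the representations up to a distortion `eps`; a system in this spirit could encourage a high coding rate for task-relevant features while suppressing it for demographic-sensitive ones. A minimal sketch, assuming representations are stacked as the columns of a matrix `Z`:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z, eps) of representations Z (shape d x n):
    0.5 * logdet(I + d / (n * eps^2) * Z @ Z.T).
    Higher values mean the representations span more volume
    (carry more information) at distortion level eps."""
    d, n = Z.shape
    gram = np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T)
    # slogdet is numerically stabler than log(det(...))
    sign, logdet = np.linalg.slogdet(gram)
    return 0.5 * logdet

# Degenerate (all-zero) representations carry no information:
print(coding_rate(np.zeros((4, 10))))   # 0.0
# Spread-out random representations have a positive coding rate:
print(coding_rate(np.random.RandomState(0).randn(4, 10)))
```

The variable names and the choice of this particular coding-rate estimate are assumptions made for illustration; the paper itself should be consulted for FaIRL's exact objective.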