Paper Title
The Fairness of Credit Scoring Models
Paper Authors
Paper Abstract
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers. However, in doing so, they can also discriminate between individuals sharing a protected attribute (e.g., gender, age, racial origin) and the rest of the population. This can be unintentional and originate from the training dataset or from the model itself. We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness. We then use these variables to optimize the fairness-performance trade-off. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups, while still maintaining a high level of forecasting accuracy.
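
A minimal sketch of what testing the fairness of a scoring model might look like in practice, assuming a simple demographic-parity comparison (difference in approval rates between a protected group and the rest) checked with a two-proportion z-test; this is an illustrative stand-in, not the paper's own test statistic, and the synthetic data, `scores`, `protected` flag, and 0.5 approval cutoff are all assumptions.

```python
# Illustrative sketch only: the paper's formal fairness test is not reproduced here.
# Shown instead: a two-proportion z-test for demographic parity, i.e., whether
# approval rates differ between a protected group and the rest of the population.
# All data is synthetic; the 0.5 approval cutoff is an assumed lending policy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 10_000
protected = rng.integers(0, 2, size=n).astype(bool)  # True = protected attribute
# Synthetic credit scores; a small location shift mimics a potential disparity.
scores = rng.normal(loc=0.55 - 0.03 * protected, scale=0.15, size=n)
approved = scores >= 0.5  # assumed approval cutoff

# Approval rates by group (the demographic-parity comparison).
p1, n1 = approved[protected].mean(), protected.sum()
p0, n0 = approved[~protected].mean(), (~protected).sum()

# Pooled two-proportion z-test: H0 is equal approval rates across groups.
p_pool = approved.mean()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
z = (p1 - p0) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"approval rate (protected): {p1:.3f}")
print(f"approval rate (others):    {p0:.3f}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

A small p-value here would flag a statistically significant gap in approval rates; the paper goes further by identifying which input variables drive such a gap and using them to trade off fairness against predictive performance.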