Title
Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation
Authors
Abstract
Credit is an essential component of financial well-being in America, and unequal access to it is a large factor in the economic disparities between demographic groups that exist today. Machine learning algorithms, sometimes trained on alternative data, are increasingly being used to determine access to credit, yet research has shown that machine learning can encode many different versions of "unfairness," raising the concern that banks and other financial institutions could, potentially unwittingly, engage in illegal discrimination through the use of this technology. In the US, there are laws in place to ensure that discrimination does not happen in lending, and agencies charged with enforcing them. However, conversations around fair credit models in computer science and in policy are often misaligned: fair machine learning research often lacks legal and practical considerations specific to existing fair lending policy, and regulators have yet to issue new guidance on how, if at all, credit risk models should utilize practices and techniques from the research community. This paper aims to better align these sides of the conversation. We describe the current state of credit discrimination regulation in the United States, contextualize results from fair ML research to identify the specific fairness concerns raised by the use of machine learning in lending, and discuss regulatory opportunities to address these concerns.