Paper Title
Explanations of Machine Learning predictions: a mandatory step for its application to Operational Processes
Paper Authors
Paper Abstract
In the global economy, credit companies play a central role in economic development through their activity as money lenders. This important task comes with some drawbacks, mainly the risk that debtors will not be able to repay the provided credit. Therefore, Credit Risk Modelling (CRM), namely the evaluation of the probability that a debtor will not repay the due amount, plays a paramount role. Statistical approaches have long been successfully exploited and have become the most widely used methods for CRM. Recently, machine and deep learning techniques have also been applied to the CRM task, showing a significant increase in prediction quality and performance. However, such techniques usually do not provide reliable explanations for the scores they produce. As a consequence, many machine and deep learning techniques fail to comply with Western regulations such as, for example, the GDPR. In this paper we suggest using the LIME (Local Interpretable Model-agnostic Explanations) technique to tackle the explainability problem in this field; we show its employment on a real credit-risk dataset and finally discuss its soundness and the improvements necessary to guarantee its adoption and compliance with the task.
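To make the abstract's proposal concrete, the following is a minimal sketch of how LIME can be applied to explain the score a black-box credit-risk classifier assigns to a single applicant. It assumes the open-source `lime` and `scikit-learn` Python packages; the synthetic data, feature names, and random-forest model are illustrative assumptions, not the dataset or pipeline used by the authors.

```python
# Sketch: local explanation of one credit-risk prediction with LIME.
# Assumptions: `lime` and `scikit-learn` installed; data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical tabular credit data: rows are applicants, columns are features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # 1 = default
feature_names = ["income", "debt_ratio", "age", "credit_history_len"]  # assumed names

# Black-box model whose scores we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME fits a local, interpretable surrogate model around one prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["repaid", "default"],
    mode="classification",
)

applicant = X_train[0]  # the instance whose score we want to explain
explanation = explainer.explain_instance(
    applicant, model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature contributions to the local prediction
```

The per-feature weights returned by `as_list()` are what the paper relies on as human-readable justifications for individual credit decisions, which is the kind of explanation regulations such as the GDPR call for.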