Paper Title
Using Knowledge Distillation to improve interpretable models in a retail banking context
Paper Authors
Paper Abstract
This article presents a review of knowledge distillation techniques with a focus on their applicability to retail banking contexts. Predictive machine learning algorithms used in banking environments, especially in risk and control functions, are generally subject to regulatory and technical constraints that limit their complexity. Knowledge distillation offers an opportunity to improve the performance of simple models without burdening their application, by leveraging the results of other models that are generally more complex and better-performing. Reviewing recent advances in this field, we highlight three main approaches: Soft Targets, Sample Selection, and Data Augmentation. We assess the relevance of a subset of these techniques by applying them to open-source datasets, before putting them to the test on the use cases of BPCE, a major French institution in the retail banking sector. In doing so, we demonstrate the potential of knowledge distillation to improve the performance of these models without altering their form and simplicity.
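To make the Soft Targets approach concrete, here is a minimal, hypothetical sketch of soft-target distillation for a binary classification task: a complex teacher (here a gradient-boosted ensemble) produces temperature-softened probabilities, which a simple, interpretable student (a logistic regression) is then trained to match. The model choices, the temperature value, and the synthetic dataset are illustrative assumptions and do not reflect the paper's actual experiments or BPCE data.

```python
# Minimal sketch of soft-target knowledge distillation.
# The teacher/student choices, temperature T, and synthetic data are
# assumptions for illustration, not the configuration used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1. Train a complex, better-performing teacher model.
teacher = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# 2. Soften the teacher's probabilities with a temperature T > 1 by
#    rescaling its logits, so the soft targets carry more information
#    than the hard 0/1 labels.
T = 2.0
p = np.clip(teacher.predict_proba(X_tr)[:, 1], 1e-6, 1 - 1e-6)
soft = 1.0 / (1.0 + np.exp(-np.log(p / (1 - p)) / T))

# 3. Train the simple, interpretable student on the soft targets.
#    Each sample is duplicated with labels 1 and 0, weighted by the soft
#    probability, which amounts to weighted cross-entropy on soft labels.
X_dup = np.vstack([X_tr, X_tr])
y_dup = np.concatenate([np.ones(len(X_tr)), np.zeros(len(X_tr))])
w_dup = np.concatenate([soft, 1.0 - soft])
student = LogisticRegression(max_iter=1000).fit(X_dup, y_dup,
                                                sample_weight=w_dup)

# Baseline: the same student trained directly on the hard labels.
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("teacher  AUC:", roc_auc_score(y_te, teacher.predict_proba(X_te)[:, 1]))
print("baseline AUC:", roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1]))
print("student  AUC:", roc_auc_score(y_te, student.predict_proba(X_te)[:, 1]))
```

Duplicating each row with complementary weights is one simple way to fit scikit-learn's logistic regression against fractional targets; whether the distilled student actually outperforms the hard-label baseline depends on the dataset and the quality of the teacher.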