Paper Title
Explaining Deep Learning Models for Structured Data using Layer-Wise Relevance Propagation
Paper Authors
Paper Abstract
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself, which must be interpreted by downstream users. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP for the first time with structured datasets using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction datasets. We show how LRP is more effective than the traditional explainability techniques of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). This effectiveness is both local at the sample level and holistic over the whole testing set. We also discuss the significant computational time advantage of LRP (1-2s) over LIME (22s) and SHAP (108s), and thus its potential for real-time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research of using XAI as an approach for feature subset selection.
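For readers unfamiliar with LRP, the core relevance-redistribution step (the epsilon-rule) can be sketched on a toy fully-connected network. The network shape, random weights, and input below are illustrative assumptions only; the paper itself applies LRP to a 1D-CNN on the structured datasets.

```python
# Minimal sketch of the LRP epsilon-rule on a toy dense network.
# All weights and inputs here are hypothetical, for illustration only.
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer input a.

    a: input activations, shape (n_in,)
    W: weights, shape (n_in, n_out)
    b: bias, shape (n_out,)
    R_out: relevance assigned to the layer outputs, shape (n_out,)
    """
    z = a @ W + b                              # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabilizer
    s = R_out / z                              # relevance per unit of activation
    return a * (W @ s)                         # relevance of each input feature

# Toy two-layer network with ReLU (hypothetical weights and input)
rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # four "structured" input features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

h = np.maximum(0.0, x @ W1 + b1)               # hidden layer (ReLU)
y = h @ W2 + b2                                # output logits

R2 = np.zeros_like(y)
R2[np.argmax(y)] = y.max()                     # start from the predicted class
R1 = lrp_epsilon(h, W2, b2, R2)                # propagate to hidden layer
R0 = lrp_epsilon(x, W1, b1, R1)                # propagate to input features

print(R0)  # per-feature relevance scores for this one sample
```

The epsilon-rule approximately conserves total relevance as it flows backward through the layers, which is what makes per-feature relevance scores directly comparable; on tabular data these scores play the role the heat maps play for images.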