Paper Title
LIMEtree: Consistent and Faithful Surrogate Explanations of Multiple Classes
Paper Authors
Paper Abstract
Explainable artificial intelligence provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class. When generating explanations for several classes, reasoning over them to obtain a comprehensive view may be difficult since they can present competing or contradictory evidence. To address this challenge, we introduce the novel paradigm of multi-class explanations. We outline the theory behind such techniques and propose a local surrogate model based on multi-output regression trees -- called LIMEtree -- that offers faithful and consistent explanations of multiple classes for individual predictions while being post-hoc, model-agnostic and data-universal. On top of strong fidelity guarantees, our implementation delivers a range of diverse explanation types, including counterfactual statements favoured in the literature. We evaluate our algorithm with respect to explainability desiderata, through quantitative experiments and via a pilot user study, on image and tabular data classification tasks, comparing it to LIME, a state-of-the-art surrogate explainer. Our contributions demonstrate the benefits of multi-class explanations and the wide-ranging advantages of our method across a diverse set of scenarios.
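To make the core idea concrete, the sketch below illustrates a multi-output local surrogate in the spirit of LIMEtree: a single multi-output regression tree is fitted to the black-box probabilities of all classes over perturbations of one instance, so the explanations for every class come from one consistent surrogate. This is a minimal illustration, not the authors' implementation; the function name `explain_instance`, the Gaussian perturbation scheme, and the hyperparameter choices are assumptions made here for demonstration only.

```python
# Minimal sketch (illustrative only): a multi-output regression tree as a
# local surrogate explaining all class probabilities of a black-box model
# around a single instance.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor, export_text


def explain_instance(black_box, instance, n_samples=1000, scale=0.5,
                     max_depth=3, seed=0):
    """Fit one shallow multi-output regression tree around `instance`.

    The tree regresses the black-box class probabilities (all classes at
    once) on locally perturbed inputs, yielding a single surrogate that is
    consistent across classes.
    """
    rng = np.random.default_rng(seed)
    # Sample perturbations in a local neighbourhood of the explained instance.
    perturbations = instance + rng.normal(
        0.0, scale, size=(n_samples, instance.shape[0]))
    # Query the black box for full probability vectors (one column per class).
    targets = black_box.predict_proba(perturbations)
    # A multi-output regression tree approximates all class outputs jointly.
    surrogate = DecisionTreeRegressor(max_depth=max_depth, random_state=seed)
    surrogate.fit(perturbations, targets)
    return surrogate


if __name__ == "__main__":
    data = load_iris()
    black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)
    surrogate = explain_instance(black_box, data.data[0])
    # The tree structure serves as a joint, multi-class explanation.
    print(export_text(surrogate, feature_names=data.feature_names))
```

Because one tree models every class simultaneously, the resulting explanations cannot contradict each other across classes, which is the consistency property the abstract highlights; the paper's actual method additionally provides fidelity guarantees and further explanation types (such as counterfactuals) not shown in this sketch.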