Paper Title


DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models

Authors

Furui Cheng, Yao Ming, Huamin Qu

Abstract


With machine learning models being increasingly applied to various decision-making scenarios, people have spent growing efforts to make machine learning models more transparent and explainable. Among various explanation techniques, counterfactual explanations have the advantages of being human-friendly and actionable -- a counterfactual explanation tells the user how to gain the desired prediction with minimal changes to the input. Besides, counterfactual explanations can also serve as efficient probes to the models' decisions. In this work, we exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models. We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets, supporting users ranging from decision-subjects to model developers. DECE supports exploratory analysis of model decisions by combining the strengths of counterfactual explanations at instance- and subgroup-levels. We also introduce a set of interactions that enable users to customize the generation of counterfactual explanations to find more actionable ones that can suit their needs. Through three use cases and an expert interview, we demonstrate the effectiveness of DECE in supporting decision exploration tasks and instance explanations.
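The abstract's core idea — "a counterfactual explanation tells the user how to gain the desired prediction with minimal changes to the input" — can be illustrated with a minimal sketch. This is not DECE's algorithm; it is a hypothetical greedy search over a toy scikit-learn model, with invented feature names, that nudges a rejected instance until the prediction flips:

```python
# A minimal counterfactual-search sketch (not DECE's method): find a small
# perturbation of an input that flips a classifier's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" data with hypothetical features [income, debt].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approve when income exceeds debt
model = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, step=0.05, max_iter=1000):
    """Greedy coordinate search: repeatedly take the single-feature step
    that most increases the target-class probability, so the total change
    (L1 distance from x) stays small."""
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if model.predict(cf[None])[0] == target:
            return cf
        best, best_p = cf, -1.0
        for i in range(len(cf)):
            for d in (-step, step):
                cand = cf.copy()
                cand[i] += d
                p = model.predict_proba(cand[None])[0, target]
                if p > best_p:
                    best, best_p = cand, p
        cf = best
    return cf

x = np.array([-0.5, 0.5])                   # a rejected applicant
cf = counterfactual(x)                      # minimally changed, now approved
print(model.predict(x[None])[0], model.predict(cf[None])[0])
```

A real system like DECE adds much more on top of this sketch: subgroup-level explanations, interactive constraints on which features may change, and diverse candidate counterfactuals rather than a single greedy path.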
