Paper Title

On Explaining Decision Trees

Authors

Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Abstract

Decision trees (DTs) epitomize what has come to be known as interpretable machine learning (ML) models. This is informally motivated by paths in DTs often being much smaller than the total number of features. This paper shows that in some settings DTs can hardly be deemed interpretable, with paths in a DT being arbitrarily larger than a PI-explanation, i.e. a subset-minimal set of feature values that entails the prediction. As a result, the paper proposes a novel model for computing PI-explanations of DTs, which enables computing one PI-explanation in polynomial time. Moreover, it is shown that enumeration of PI-explanations can be reduced to the enumeration of minimal hitting sets. Experimental results, obtained on a wide range of publicly available datasets with well-known DT-learning tools, confirm that in most cases DTs have paths that are proper supersets of PI-explanations.
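The gap between a path and a PI-explanation can be made concrete with a small sketch. This is a naive brute-force check over binary features (all names are illustrative; it is not the paper's polynomial-time model): a DT whose first two tests are redundant yields a path of length 3, while a subset-minimal set of feature values entailing the same prediction has size 1.

```python
from itertools import combinations, product

def predict(tree, x):
    """Walk a DT encoded as nested dicts {'feat': i, 0: low_child, 1: high_child};
    leaves are class labels."""
    while isinstance(tree, dict):
        tree = tree[x[tree['feat']]]
    return tree

def entails(tree, fixed, n_feats):
    """True iff every completion of the unfixed binary features yields one prediction."""
    free = [i for i in range(n_feats) if i not in fixed]
    preds = set()
    for vals in product([0, 1], repeat=len(free)):
        point = dict(fixed)
        point.update(zip(free, vals))
        preds.add(predict(tree, point))
    return len(preds) == 1

def pi_explanation(tree, path, n_feats):
    """Brute force: smallest subset of the path's literals that still entails
    the prediction (cardinality-minimal, hence also subset-minimal)."""
    for k in range(len(path) + 1):
        for subset in combinations(path, k):
            fixed = {f: path[f] for f in subset}
            if entails(tree, fixed, n_feats):
                return fixed

# A DT whose first two splits are redundant: the label depends only on x2.
leaf = {'feat': 2, 0: 0, 1: 1}
mid = {'feat': 1, 0: leaf, 1: leaf}
tree = {'feat': 0, 0: mid, 1: mid}

path = {0: 1, 1: 1, 2: 1}             # path literals for the instance (1, 1, 1)
print(pi_explanation(tree, path, 3))  # {2: 1} -- strictly smaller than the path
```

Note that this enumeration is exponential in the number of features; the point of the paper is precisely that one PI-explanation of a DT can instead be computed in polynomial time.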
