Paper Title
White-box Induction From SVM Models: Explainable AI with Logic Programming
Paper Authors
Paper Abstract
We focus on the problem of inducing logic programs that explain models learned by the support vector machine (SVM) algorithm. Top-down sequential covering inductive logic programming (ILP) algorithms (e.g., FOIL) apply hill-climbing search using heuristics from information theory. A major issue with this class of algorithms is getting stuck in local optima. In our new approach, however, the data-dependent hill-climbing search is replaced with a model-dependent search: a globally optimal SVM model is trained first, then the algorithm treats the support vectors as the most influential data points in the model and induces a clause that covers each support vector together with the data points most similar to it. Instead of defining a fixed hypothesis search space, our algorithm makes use of SHAP, an example-specific explainer from explainable AI, to determine a relevant set of features. This approach yields an algorithm that captures the SVM model's underlying logic and outperforms other ILP algorithms in terms of the number of induced clauses and classification evaluation metrics. This paper is under consideration for publication in the journal Theory and Practice of Logic Programming.
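The abstract's pipeline (train a globally optimal SVM, treat support vectors as the most influential examples, then induce a clause covering each support vector and its most similar points over a SHAP-selected feature set) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `induce_clause` helper and its neighbourhood-spread feature ranking are hypothetical stand-ins for the paper's SHAP-based relevance step, and the toy dataset is invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy binary dataset: the class label depends mostly on features 0 and 1.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Step 1: train the SVM model (the "globally optimal" model of the abstract).
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Step 2: support vectors are the most influential data points.
support = clf.support_  # indices of the support vectors in X

def induce_clause(sv_idx, k=10, n_features=2):
    """Induce one interval 'clause' around a support vector: take its k
    nearest same-class neighbours and bound the most relevant features.
    Relevance is approximated here by the tightness of the neighbourhood
    per feature; the paper instead uses SHAP's example-specific scores."""
    sv, label = X[sv_idx], y[sv_idx]
    same = np.where(y == label)[0]
    dists = np.linalg.norm(X[same] - sv, axis=1)
    covered = X[same[np.argsort(dists)[:k]]]  # SV plus its nearest points
    spread = covered.std(axis=0)
    feats = np.argsort(spread)[:n_features]   # stand-in for SHAP selection
    lo = covered[:, feats].min(axis=0)
    hi = covered[:, feats].max(axis=0)
    return label, list(zip(feats.tolist(), lo.tolist(), hi.tolist()))

label, literals = induce_clause(support[0])
# Read as a clause: class(label) :- lo_i <= feature(f_i) <= hi_i for each literal.
print(label, literals)
```

Each induced clause covers the support vector's neighbourhood, so iterating over `support` and removing covered examples would give a sequential-covering loop driven by the model rather than by data-dependent hill climbing, mirroring the contrast the abstract draws with FOIL.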