Paper title
Deducing neighborhoods of classes from a fitted model
Paper authors
Paper abstract
In today's world, the demand for very complex models for huge data sets is rising steadily. The problem with these models is that, as their complexity increases, they become much harder to interpret. The growing field of \emph{interpretable machine learning} tries to make up for the lack of interpretability in these complex (or even black-box) models by using specific techniques that help to understand them better. In this article, a new kind of interpretable machine learning method is presented that helps to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts. To illustrate in which situations this quantile shift method (QSM) could be beneficial, it is applied to a theoretical medical example and a real data example. Basically, real data points (or specific points of interest) are used, and the changes in the predictions after slightly raising or lowering specific features are observed. By comparing the predictions before and after the manipulations, the observed changes can, under certain conditions, be interpreted as neighborhoods of the classes with regard to the manipulated features. Chord graphs are used to visualize the observed changes.
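To make the procedure described in the abstract concrete, the following is a minimal sketch, not the authors' reference implementation: it assumes a fitted scikit-learn classifier, moves one feature of every observation up by a fixed quantile step (the 0.05 step, the Iris data, and the function name are illustrative choices), re-queries the model, and tallies how many points change their predicted class.

```python
# Minimal quantile-shift sketch (illustrative, assumes scikit-learn):
# shift one feature by a fixed quantile step, re-predict, and count
# class transitions before vs. after the manipulation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def quantile_shift_transitions(model, X, feature_idx, shift=0.05):
    """Shift feature `feature_idx` up by `shift` quantiles for every
    observation and tally predicted-class transitions."""
    col = X[:, feature_idx]
    # empirical quantile of each observation for this feature
    ranks = np.searchsorted(np.sort(col), col, side="right") / len(col)
    # target quantiles after the shift, clipped to [0, 1]
    target = np.clip(ranks + shift, 0.0, 1.0)
    X_shifted = X.copy()
    X_shifted[:, feature_idx] = np.quantile(col, target)
    before = model.predict(X)
    after = model.predict(X_shifted)
    n_classes = len(np.unique(y))
    transitions = np.zeros((n_classes, n_classes), dtype=int)
    for b, a in zip(before, after):
        transitions[b, a] += 1
    # rows: predicted class before the shift, columns: after the shift
    return transitions

print(quantile_shift_transitions(model, X, feature_idx=2))
```

Off-diagonal counts in such a transition matrix are what a chord graph would visualize: they indicate which classes border each other in the direction of the manipulated feature.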