Paper Title

Explanation of Machine Learning Models of Colon Cancer Using SHAP Considering Interaction Effects

Authors

Yasunobu Nohara, Toyoshi Inoguchi, Chinatsu Nojiri, Naoki Nakashima

Abstract

When using machine learning techniques in decision-making processes, the interpretability of the models is important. Shapley additive explanation (SHAP) is one of the most promising interpretation methods for machine learning models. Interaction effects occur when the effect of one variable depends on the value of another. Even if each variable on its own has little effect on the outcome, their combination can have an unexpectedly large impact. Understanding interactions is important for understanding machine learning models; however, naive SHAP analysis cannot distinguish between main effects and interaction effects. In this paper, we introduce the Shapley-Taylor index as a SHAP-based interpretation method for machine learning models that takes interaction effects into account. We apply the method to the cancer cohort data of Kyushu University Hospital (N = 29,080) to analyze which combinations of factors contribute to the risk of colon cancer.
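The paper itself does not include code, but the distinction between main effects and interaction effects can be illustrated with the shap library. The sketch below is a minimal, hypothetical example (synthetic data and feature indices of my own choosing, not the Kyushu cohort or the authors' pipeline): it trains a gradient-boosted tree on an outcome driven mainly by an x0*x1 interaction, then decomposes each prediction into per-feature main effects (diagonal entries) and pairwise interaction effects (off-diagonal entries) via TreeExplainer's shap_interaction_values. Note that SHAP's built-in interaction values implement the Shapley interaction index, which is related to, but not identical to, the Shapley-Taylor index the paper introduces.

```python
# Minimal sketch, assuming the `shap` and `xgboost` packages are installed.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
# Outcome driven mainly by an x0*x1 interaction: each variable alone
# contributes little, but their combination determines the label.
y = (X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
# shap_values[i, j]: total attribution of feature j for sample i
# (main effect plus shares of its pairwise interactions); this is the
# "naive" view that cannot separate the two.
shap_values = explainer.shap_values(X)
# interaction_values[i, j, k]: pairwise interaction between features j
# and k for sample i; the diagonal holds the pure main effects.
interaction_values = explainer.shap_interaction_values(X)

main_effects = np.diagonal(interaction_values, axis1=1, axis2=2)
print("mean |main effect| per feature:", np.abs(main_effects).mean(axis=0))
print("mean |x0-x1 interaction|:", np.abs(interaction_values[:, 0, 1]).mean())
```

On data like this, the off-diagonal x0-x1 term dominates the diagonal main effects, which is exactly the pattern a naive per-feature SHAP summary would obscure.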
