Paper Title
Task-Agnostic Graph Explanations
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph-structured data. Due to their broad applications, there is an increasing need to develop tools to explain how GNNs make decisions given graph-structured data. Existing learning-based GNN explanation approaches are task-specific in training and hence suffer from crucial drawbacks. Specifically, they are incapable of producing explanations for a multitask prediction model with a single explainer. They are also unable to provide explanations in cases where the GNN is trained in a self-supervised manner, and the resulting representations are used in future downstream tasks. To address these limitations, we propose a Task-Agnostic GNN Explainer (TAGE) that is independent of downstream models and trained under self-supervision with no knowledge of downstream tasks. TAGE enables the explanation of GNN embedding models with unseen downstream tasks and allows efficient explanation of multitask models. Our extensive experiments show that TAGE can significantly improve explanation efficiency by using the same model to explain predictions for multiple downstream tasks, while achieving explanation quality as good as or even better than current state-of-the-art GNN explanation approaches. Our code is publicly available as part of the DIG library at https://github.com/divelab/DIG/tree/main/dig/xgraph/TAGE/.
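To make the task-agnostic idea concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: an explainer that depends only on a frozen embedding model, so the same edge-importance mask can be reused under any number of downstream heads. All names here (`EdgeExplainer`, the heads, the tensors) are hypothetical illustrations, not the authors' actual TAGE API; see the DIG repository linked above for the real implementation.

```python
# A minimal sketch (hypothetical names) of a task-agnostic explainer:
# it scores edges from node embeddings alone, with no access to any
# downstream task head.
import torch
import torch.nn as nn

class EdgeExplainer(nn.Module):
    """Scores each edge's importance from its endpoint node embeddings."""
    def __init__(self, emb_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, 1),
        )

    def forward(self, node_emb: torch.Tensor,
                edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index  # edge_index has shape (2, num_edges)
        pair = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        # Per-edge soft mask in (0, 1): higher means more important.
        return torch.sigmoid(self.mlp(pair)).squeeze(-1)

# Hypothetical usage: node_emb stands in for the output of a frozen,
# self-supervised GNN encoder. One explanation serves many tasks.
emb_dim, num_nodes, num_edges = 64, 10, 30
node_emb = torch.randn(num_nodes, emb_dim)
edge_index = torch.randint(0, num_nodes, (2, num_edges))

explainer = EdgeExplainer(emb_dim)
edge_mask = explainer(node_emb, edge_index)  # computed once

for head in [nn.Linear(emb_dim, 2), nn.Linear(emb_dim, 5)]:
    # Downstream heads (e.g., two different classification tasks)
    # never touch the explainer, so no per-task retraining is needed.
    logits = head(node_emb)
```

The design choice this sketch highlights is the decoupling: because the explainer consumes only embeddings and graph structure, swapping or adding downstream heads leaves the explanation model untouched, which is what enables the multitask and unseen-task settings the abstract claims.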