Paper Title

GAIN: Graph Attention & Interaction Network for Inductive Semi-Supervised Learning over Large-scale Graphs

Authors

Yunpeng Weng, Xu Chen, Liang Chen, Wei Liu

Abstract

Graph Neural Networks (GNNs) have led to state-of-the-art performance on a variety of machine learning tasks such as recommendation, node classification and link prediction. Graph neural network models generate node embeddings by merging node features with aggregated neighboring node information. Most existing GNN models exploit a single type of aggregator (e.g., mean-pooling) to aggregate neighboring node information, and then add or concatenate the output of the aggregator to the current representation vector of the center node. However, a single type of aggregator struggles to capture the different aspects of neighboring information, and the simple addition or concatenation update methods limit the expressive capability of GNNs. Moreover, existing supervised or semi-supervised GNN models are trained with a loss function based only on node labels, which neglects graph structure information. In this paper, we propose a novel graph neural network architecture, Graph Attention & Interaction Network (GAIN), for inductive learning on graphs. Unlike previous GNN models that utilize only a single type of aggregation method, we use multiple types of aggregators to gather neighboring information in different aspects and integrate the outputs of these aggregators through an aggregator-level attention mechanism. Furthermore, we design a graph regularized loss to better capture the topological relationships of the nodes in the graph. Additionally, we first present the concept of graph feature interaction and propose a vector-wise explicit feature interaction mechanism to update the node embeddings. We conduct comprehensive experiments on two node-classification benchmarks and a real-world financial news dataset. The experiments demonstrate that our GAIN model outperforms the current state-of-the-art on all the tasks.
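The core idea in the abstract, combining several neighbor aggregators through aggregator-level attention, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of mean/max/sum aggregators, the attention parameter `w_att`, and the final concatenation step are all assumptions (GAIN actually replaces concatenation with its vector-wise feature interaction).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def multi_aggregator_update(center, neighbors, w_att):
    """Sketch of aggregator-level attention over several aggregator types.

    center:    (d,) current embedding of the center node.
    neighbors: (n, d) embeddings of its neighbors.
    w_att:     (d,) hypothetical attention parameter vector (assumed).
    """
    # Several aggregator types capture different aspects of the neighborhood.
    agg_outputs = np.stack([
        neighbors.mean(axis=0),  # mean-pooling aggregator
        neighbors.max(axis=0),   # max-pooling aggregator
        neighbors.sum(axis=0),   # sum aggregator
    ])
    # Aggregator-level attention: score each aggregator's output, then
    # normalize the scores into attention weights.
    alpha = softmax(agg_outputs @ w_att)
    combined = (alpha[:, None] * agg_outputs).sum(axis=0)
    # GAIN would next apply its vector-wise feature interaction with the
    # center embedding; plain concatenation stands in for that step here.
    return np.concatenate([center, combined])
```

The combined neighborhood vector is a convex combination of the individual aggregator outputs, so no single aggregator's view is discarded; the attention weights decide how much each aspect contributes per node.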
