Paper Title
Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences
Paper Authors
Paper Abstract
Modern graph neural networks (GNNs) use a message passing scheme and have achieved great success in many fields. However, this recursive design inherently leads to excessive computation and memory requirements, making it inapplicable to massive real-world graphs. In this work, we propose Neighbor2Seq, which transforms the hierarchical neighborhood of each node into a sequence. This novel transformation enables subsequent mini-batch training with general deep learning operations, such as convolution and attention, that are designed for grid-like data and have proven powerful in various domains. Therefore, by precomputing the Neighbor2Seq transformations, our method naturally endows GNNs with the efficiency and advantages of deep learning operations on grid-like data. We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges, as well as on several medium-scale graphs. Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs. Our code is available at https://github.com/divelab/Neighbor2Seq.
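The core transformation described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a dense adjacency (or normalized propagation) matrix and plain feature averaging, and the function name `neighbor2seq` is chosen here for illustration. The key point is that hop-wise aggregates are precomputed once, so each node afterwards carries an independent fixed-length sequence that can be mini-batched and fed to convolution or attention like grid data.

```python
import numpy as np

def neighbor2seq(adj, feats, num_hops):
    """Sketch of the Neighbor2Seq idea: turn each node's hierarchical
    neighborhood into a sequence of hop-wise aggregated features.

    adj:      (n, n) dense adjacency or normalized propagation matrix
              (an assumption for this sketch; a real system would use
              sparse matrices for massive graphs).
    feats:    (n, d) node feature matrix.
    num_hops: number of propagation steps L.

    Returns an (n, L + 1, d) array: for each node, a sequence whose
    position k holds features aggregated from its k-hop neighborhood
    (position 0 is the node's own features).
    """
    seq = [feats]
    h = feats
    for _ in range(num_hops):
        h = adj @ h          # aggregate one hop further out
        seq.append(h)
    return np.stack(seq, axis=1)  # stack hops into a per-node sequence
```

Because this precomputation happens once, training no longer needs recursive neighborhood expansion: nodes are sampled independently, and any sequence model (1D convolution, attention) is applied over the hop dimension.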