Paper Title

Instant Graph Neural Networks for Dynamic Graphs

Authors

Yanping Zheng, Hanzhi Wang, Zhewei Wei, Jiajun Liu, Sibo Wang

Abstract

Graph Neural Networks (GNNs) have been widely used for modeling graph-structured data. With the development of numerous GNN variants, recent years have witnessed groundbreaking results in improving the scalability of GNNs to work on static graphs with millions of nodes. However, how to instantly represent continuous changes of large-scale dynamic graphs with GNNs is still an open problem. Existing dynamic GNNs focus on modeling the periodic evolution of graphs, often on a snapshot basis. Such methods suffer from two drawbacks: first, there is a substantial delay for the changes in the graph to be reflected in the graph representations, resulting in losses on the model's accuracy; second, repeatedly calculating the representation matrix on the entire graph in each snapshot is predominantly time-consuming and severely limits the scalability. In this paper, we propose Instant Graph Neural Network (InstantGNN), an incremental computation approach for the graph representation matrix of dynamic graphs. Set to work with dynamic graphs with the edge-arrival model, our method avoids time-consuming, repetitive computations and allows instant updates on the representation and instant predictions. Graphs with dynamic structures and dynamic attributes are both supported. The upper bounds of time complexity of those updates are also provided. Furthermore, our method provides an adaptive training strategy, which guides the model to retrain at moments when it can make the greatest performance gains. We conduct extensive experiments on several real-world and synthetic datasets. Empirical results demonstrate that our model achieves state-of-the-art accuracy while having orders-of-magnitude higher efficiency than existing methods.
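The core idea the abstract describes is that when a few edges arrive, only the affected entries of the representation matrix need updating, rather than recomputing the whole matrix per snapshot. The toy sketch below illustrates that incremental pattern on a deliberately simple one-hop mean aggregation; it is not InstantGNN's actual PPR-style propagation, and the class and method names (`IncrementalMeanAggregator`, `add_edge`) are invented for this illustration. An edge arrival touches only its two endpoints, so each update costs O(d) for feature dimension d instead of a full O(n) recomputation.

```python
import numpy as np

class IncrementalMeanAggregator:
    """Toy illustration of incremental representation maintenance on a
    dynamic graph: each node's representation is the mean of its current
    neighbors' features. This mimics the incremental-update idea from the
    abstract, NOT InstantGNN's propagation scheme."""

    def __init__(self, features):
        self.x = np.asarray(features, dtype=float)  # node features, shape (n, d)
        n = self.x.shape[0]
        self.neigh_sum = np.zeros_like(self.x)      # running sum of neighbor features
        self.deg = np.zeros(n, dtype=int)           # current degrees

    def add_edge(self, u, v):
        # Undirected edge arrival: only the two endpoints' state changes.
        self.neigh_sum[u] += self.x[v]
        self.neigh_sum[v] += self.x[u]
        self.deg[u] += 1
        self.deg[v] += 1

    def representation(self):
        # Instant read-out of the up-to-date representation matrix.
        d = np.maximum(self.deg, 1)[:, None]        # guard isolated nodes
        return self.neigh_sum / d

# Usage: edges arrive one by one; representations stay current throughout.
agg = IncrementalMeanAggregator([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
agg.add_edge(0, 1)
agg.add_edge(1, 2)
rep = agg.representation()
```

The same contrast drives the paper's complexity argument: a snapshot-based model would rebuild `rep` for all nodes at every snapshot, while the incremental version pays only for the entries an update actually affects.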
