Paper Title
Improving Graph Neural Networks at Scale: Combining Approximate PageRank and CoreRank
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have achieved great successes in many learning tasks performed on graph structures. Nonetheless, to propagate information, GNNs rely on a message-passing scheme which can become prohibitively expensive when working with industrial-scale graphs. Inspired by the PPRGo model, we propose the CorePPR model, a scalable solution that utilises a learnable convex combination of the approximate personalised PageRank and the CoreRank to diffuse multi-hop neighbourhood information in GNNs. Additionally, we incorporate a dynamic mechanism to select the most influential neighbours for a particular node, which reduces training time while preserving the performance of the model. Overall, we demonstrate that CorePPR outperforms PPRGo, particularly on large graphs, where selecting the most influential nodes is especially relevant for scalability. Our code is publicly available at: https://github.com/arielramos97/CorePPR.
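As a rough sketch of the diffusion step described in the abstract (the notation below is an assumption for illustration, not taken from the paper itself), a learnable convex combination of the two importance scores for a source node v and a candidate neighbour u could be written as

\[
\pi(v,u) \;=\; \alpha \,\pi_{\mathrm{PPR}}(v,u) \;+\; (1-\alpha)\,\pi_{\mathrm{Core}}(v,u), \qquad \alpha \in [0,1],
\]

where \(\alpha\) is learned during training and, per node v, only the top-k neighbours u ranked by \(\pi(v,u)\) are retained, which is how the dynamic neighbour-selection mechanism limits the cost of message passing.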