Paper title
Graph and graphon neural network stability
Paper authors
Paper abstract
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of large-scale network data. GNN stability is thus important, as in real-world scenarios there are typically uncertainties associated with the graph. We analyze GNN stability using kernel objects called graphons. Graphons are both limits of convergent graph sequences and generating models for deterministic and stochastic graphs. Building upon the theory of graphon signal processing, we define graphon neural networks and analyze their stability to graphon perturbations. We then extend this analysis by interpreting the graphon neural network as a generating model for GNNs on deterministic and stochastic graphs instantiated from the original and perturbed graphons. We observe that GNNs are stable to graphon perturbations, with a stability bound that decreases asymptotically with the size of the graph. This asymptotic behavior is further demonstrated in a movie recommendation experiment.
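The abstract describes graphons as generating models for stochastic graphs. A minimal sketch of the standard sampling procedure (not taken from this paper: the kernel `w` and function names below are illustrative assumptions): draw latent node positions uniformly on [0,1], then connect each pair independently with probability given by the graphon evaluated at those positions.

```python
import numpy as np

def sample_graph_from_graphon(w, n, rng=None):
    """Sample an n-node stochastic graph from a graphon w: [0,1]^2 -> [0,1]."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, 1.0, size=n)        # latent node positions in [0,1]
    probs = w(u[:, None], u[None, :])        # edge probabilities W(u_i, u_j)
    draws = rng.random((n, n)) < probs       # independent Bernoulli draws
    adj = np.triu(draws, k=1)                # upper triangle only, no self-loops
    return (adj | adj.T).astype(int)         # symmetrize to an undirected graph

# Illustrative graphon: a smooth, symmetric kernel on [0,1]^2 (an assumed example)
w = lambda x, y: np.exp(-np.abs(x - y))
A = sample_graph_from_graphon(w, 50, rng=0)
```

Larger `n` yields graphs whose structure concentrates around the graphon, which is the regime in which the abstract's stability bound tightens.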