Paper Title
A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
Paper Authors
Paper Abstract
In this paper, we derive generalization bounds for the two primary classes of graph neural networks (GNNs), namely graph convolutional networks (GCNs) and message passing GNNs (MPGNNs), via a PAC-Bayesian approach. Our result reveals that the maximum node degree and spectral norm of the weights govern the generalization bounds of both models. We also show that our bound for GCNs is a natural generalization of the results developed in arXiv:1707.09564v2 [cs.LG] for fully-connected and convolutional neural networks. For message passing GNNs, our PAC-Bayes bound improves over the Rademacher complexity based bound in arXiv:2002.06157v1 [cs.LG], showing a tighter dependency on the maximum node degree and the maximum hidden dimension. The key ingredients of our proofs are a perturbation analysis of GNNs and the generalization of PAC-Bayes analysis to non-homogeneous GNNs. We perform an empirical study on several real-world graph datasets and verify that our PAC-Bayes bound is tighter than others.
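As a minimal illustration of the quantities the abstract identifies as governing the bounds, the sketch below computes the maximum node degree of a graph and the per-layer spectral norms of a GNN's weight matrices. This is not the paper's actual bound computation; the function name and the toy graph are hypothetical, and only NumPy is assumed.

```python
import numpy as np

def bound_quantities(adjacency, weights):
    """Return the maximum node degree and the spectral norm of each
    layer's weight matrix, the two quantities the abstract says
    govern the PAC-Bayes generalization bounds.

    adjacency : (n, n) array-like, binary adjacency matrix of the graph.
    weights   : list of 2-D arrays, one weight matrix per GNN layer.
    """
    A = np.asarray(adjacency)
    max_degree = int(A.sum(axis=1).max())           # maximum node degree
    spectral_norms = [np.linalg.norm(W, ord=2)      # largest singular value
                      for W in weights]
    return max_degree, spectral_norms

if __name__ == "__main__":
    # Toy 4-node path graph and two random weight matrices (illustrative only).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
    rng = np.random.default_rng(0)
    Ws = [rng.standard_normal((8, 16)), rng.standard_normal((16, 4))]
    d, norms = bound_quantities(A, Ws)
    print("max node degree:", d)
    print("spectral norms per layer:", [round(float(s), 3) for s in norms])
```

In the paper's empirical study, quantities of this kind are what make the bounds computable on real-world graph datasets; the exact way they enter the bounds is given by the theorems in the paper itself.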