Paper Title

Stable and Transferable Hyper-Graph Neural Networks

Authors

Hayhoe, Mikhail, Riess, Hans, Preciado, Victor M., Ribeiro, Alejandro

Abstract

We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs), which we call a Hyper-Graph Expansion Neural Network (HENN), and provide the first bounds on the stability and transferability error of a hypergraph signal processing model. To do so, we provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity. By bounding the difference between two graph shift operators (GSOs) in the positive semi-definite sense via their eigenvalue spectra, we show that this error depends only on the properties of the GNN and the magnitude of the spectral similarity of the GSOs. Moreover, we show that existing transferability results, which assume the graphs are small perturbations of one another, or that the graphs are random and drawn from the same distribution or sampled from the same graphon, can be recovered using our approach. Thus, both GNNs and our HENNs (trained using normalized Laplacians as graph shift operators) will be increasingly stable and transferable as the graphs become larger. Experimental results illustrate the importance of considering multiple graph representations in HENN, and show its superior performance when transferability is desired.
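To make the spectral-similarity idea concrete, the following is a minimal sketch (not the paper's implementation) that builds the normalized Laplacian of a toy hypergraph's clique expansion, perturbs the hypergraph, and compares the sorted eigenvalue spectra of the two GSOs. The incidence matrix `H`, the perturbation, and the use of the maximum elementwise eigenvalue gap as a rough proxy for spectral similarity are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

# Toy hypergraph on 5 nodes with 3 hyperedges, given as an
# incidence matrix H (rows: nodes, columns: hyperedges).
# This example hypergraph is made up for illustration.
H = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

def clique_expansion_laplacian(H):
    """Normalized Laplacian of the clique expansion of a hypergraph.

    Two nodes are connected with weight equal to the number of
    hyperedges they share; self-loops are dropped.
    """
    A = H @ H.T
    np.fill_diagonal(A, 0.0)
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(d)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# GSO of the original hypergraph.
L1 = clique_expansion_laplacian(H)

# Perturb the hypergraph by adding node 4 to hyperedge 0.
H2 = H.copy()
H2[4, 0] = 1.0
L2 = clique_expansion_laplacian(H2)

# Sorted eigenvalue spectra of the two GSOs; the maximum elementwise
# gap is a crude proxy for the magnitude of spectral similarity that
# enters the stability/transferability bound.
ev1 = np.sort(np.linalg.eigvalsh(L1))
ev2 = np.sort(np.linalg.eigvalsh(L2))
spectral_gap = np.max(np.abs(ev1 - ev2))
print(spectral_gap)
```

Normalized Laplacians are a natural choice of GSO here because their eigenvalues always lie in [0, 2] regardless of graph size, which is what allows spectra of differently sized graphs to be compared at all.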
