Paper Title
How Powerful is Implicit Denoising in Graph Neural Networks
Paper Authors
Abstract
Graph Neural Networks (GNNs), which aggregate features from neighbors, are widely used for graph-structured data processing due to their powerful representation learning capabilities. It is generally believed that GNNs can implicitly remove non-predictive noise. However, the analysis of the implicit denoising effect in graph neural networks remains open. In this work, we conduct a comprehensive theoretical study and analyze when and why implicit denoising occurs in GNNs. Specifically, we study the convergence properties of the noise matrix. Our theoretical analysis suggests that implicit denoising largely depends on the connectivity, the graph size, and the GNN architecture. Moreover, we formally define and propose the adversarial graph signal denoising (AGSD) problem by extending the graph signal denoising problem. By solving this problem, we derive a robust graph convolution that enhances the smoothness of the node representations and the implicit denoising effect. Extensive empirical evaluations verify our theoretical analyses and the effectiveness of our proposed model.
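As background for the graph signal denoising problem that the abstract says AGSD extends, the sketch below shows the classical formulation: minimize ||F - X||_F^2 + c * tr(F^T L F) over denoised signals F, which has the closed-form solution F = (I + cL)^{-1} X. This is a minimal illustration, not the paper's AGSD or robust graph convolution; the tiny path graph, the fixed perturbation, and the smoothness weight `c` are all illustrative assumptions.

```python
import numpy as np

# Classical graph signal denoising (background sketch, not the paper's AGSD):
#   minimize ||F - X||_F^2 + c * tr(F^T L F)
# Setting the gradient to zero gives F = (I + cL)^{-1} X.

# 3-node path graph 0 - 1 - 2, with Laplacian L = D - A (illustrative).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

clean = np.ones(3)                  # smooth ground-truth signal
noise = np.array([0.5, -0.5, 0.0])  # fixed perturbation for illustration
X = clean + noise                   # observed noisy signal

c = 1.0  # smoothness weight (assumed hyperparameter)
F = np.linalg.solve(np.eye(3) + c * L, X)  # denoised signal

# The Laplacian smoothing term pulls the signal back toward the clean one.
print(np.linalg.norm(X - clean))  # error of the noisy observation
print(np.linalg.norm(F - clean))  # smaller error after denoising
```

One-step approximations of this closed-form solution recover GCN-style propagation, which is why graph convolutions can be read as implicit denoisers.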