Paper Title
Graph-Bert: Only Attention is Needed for Learning Graph Representations
Paper Authors
Paper Abstract
The dominant graph neural networks (GNNs) over-rely on the graph links, and several serious performance problems with this reliance have already been observed, e.g., the suspended animation problem and the over-smoothing problem. What's more, the inherently inter-connected nature of graphs precludes parallelization within the graph, which becomes critical for large-sized graphs, as memory constraints limit batching across the nodes. In this paper, we introduce a new graph neural network, namely GRAPH-BERT (Graph-based BERT), based solely on the attention mechanism without any graph convolution or aggregation operators. Instead of feeding GRAPH-BERT the complete large input graph, we propose to train GRAPH-BERT with sampled linkless subgraphs within their local contexts. GRAPH-BERT can be learned effectively in a standalone mode. Meanwhile, a pre-trained GRAPH-BERT can also be transferred to other application tasks directly or with necessary fine-tuning if any supervised label information or a certain application-oriented objective is available. We have tested the effectiveness of GRAPH-BERT on several graph benchmark datasets. Based on GRAPH-BERT pre-trained with the node attribute reconstruction and structure recovery tasks, we further fine-tune GRAPH-BERT on node classification and graph clustering tasks specifically. The experimental results demonstrate that GRAPH-BERT can outperform existing GNNs in both learning effectiveness and efficiency.
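To make the core idea of the abstract concrete, the following is a minimal sketch (not the authors' released code) of the two pieces it describes: sampling a linkless subgraph around a target node and encoding the nodes' attribute embeddings with a pure attention (Transformer) encoder, i.e., with no graph convolution or neighborhood aggregation. The sampling heuristic, the relevance-score matrix, and all hyper-parameters below are illustrative placeholders rather than the paper's exact procedure.

```python
# Sketch under stated assumptions: PyTorch, a precomputed (N, N) node-to-node
# relevance matrix as a stand-in for the paper's subgraph sampling criterion.
import torch
import torch.nn as nn


def sample_linkless_subgraph(scores: torch.Tensor, target: int, k: int) -> torch.Tensor:
    """Return the target node plus its k highest-scoring context nodes.

    Only node indices are returned -- no edges -- hence a "linkless" subgraph.
    """
    context = torch.topk(scores[target], k).indices
    return torch.cat([torch.tensor([target]), context])


class LinklessSubgraphEncoder(nn.Module):
    """Attention-only encoder over the node-attribute embeddings of a sampled subgraph."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)  # raw attribute embedding
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, node_features: torch.Tensor) -> torch.Tensor:
        # node_features: (batch, subgraph_size, in_dim)
        h = self.encoder(self.embed(node_features))
        return h[:, 0]  # representation of the target node (placed first in the subgraph)


# Toy usage: 100 nodes with 16-dim attributes; random relevance scores as a placeholder.
x = torch.randn(100, 16)
scores = torch.rand(100, 100)
idx = sample_linkless_subgraph(scores, target=0, k=7)
model = LinklessSubgraphEncoder(in_dim=16)
target_repr = model(x[idx].unsqueeze(0))  # shape: (1, 64)
```

Because each training example is a small fixed-size subgraph rather than the whole graph, mini-batches can be formed across target nodes, which is what lets this style of model sidestep the memory and parallelization limits the abstract attributes to link-based GNNs.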