Paper Title
Graph Representation Learning via Graphical Mutual Information Maximization
Paper Authors
Paper Abstract
The rich content of various information networks, such as social networks and communication networks, offers unprecedented potential for learning high-quality, expressive representations without external supervision. This paper investigates how to preserve and extract the abundant information in graph-structured data into an embedding space in an unsupervised manner. To this end, we propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations. GMI generalizes the idea of conventional mutual information computation from vector space to the graph domain, where measuring mutual information with respect to both node features and topological structure is indispensable. GMI exhibits several benefits: first, it is invariant to isomorphic transformations of input graphs, an inevitable constraint in many existing graph representation learning algorithms; second, it can be efficiently estimated and maximized by current mutual information estimation methods such as MINE; finally, our theoretical analysis confirms its correctness and rationality. With the aid of GMI, we develop an unsupervised learning model trained by maximizing the GMI between the input and output of a graph neural encoder. Extensive experiments on transductive and inductive node classification and link prediction demonstrate that our method outperforms state-of-the-art unsupervised counterparts, and sometimes even exceeds the performance of supervised ones.
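To make the training scheme the abstract describes more concrete, below is a minimal PyTorch sketch of mutual-information maximization between the input and output of a graph neural encoder. The one-layer GCN-style encoder, the bilinear discriminator, the Jensen-Shannon MI lower bound (a MINE-style estimator), the row-shuffling negative sampling, and the random toy graph are all illustrative assumptions, not the authors' exact GMI architecture or estimator.

```python
# Minimal sketch (assumed setup, not the paper's code): train a graph encoder so
# that each node's embedding has high estimated mutual information with its own
# input features (positives) versus shuffled features (negatives).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """One-layer GCN-style encoder: H = ReLU(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.lin(x))

class Discriminator(nn.Module):
    """Bilinear critic scoring (node feature, node embedding) pairs."""
    def __init__(self, feat_dim, emb_dim):
        super().__init__()
        self.bilinear = nn.Bilinear(feat_dim, emb_dim, 1)

    def forward(self, x, h):
        return self.bilinear(x, h).squeeze(-1)

def jsd_mi_lower_bound(pos_scores, neg_scores):
    # Jensen-Shannon MI estimator: E_pos[-softplus(-T)] - E_neg[softplus(T)]
    return (-F.softplus(-pos_scores)).mean() - F.softplus(neg_scores).mean()

# Toy data: random symmetric graph with self-loops, then the symmetrically
# normalized adjacency A_hat = D^{-1/2} A D^{-1/2}.
n, feat_dim, emb_dim = 100, 16, 8
x = torch.randn(n, feat_dim)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()
deg_inv_sqrt = adj.sum(1).pow(-0.5)
a_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

encoder = GraphEncoder(feat_dim, emb_dim)
critic = Discriminator(feat_dim, emb_dim)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(critic.parameters()), lr=1e-3
)

for epoch in range(200):
    h = encoder(x, a_hat)
    pos = critic(x, h)                     # each node paired with its own features
    neg = critic(x[torch.randperm(n)], h)  # negatives: shuffled feature rows
    loss = -jsd_mi_lower_bound(pos, neg)   # maximize the MI lower bound
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, `encoder(x, a_hat)` yields node embeddings that can be fed to a downstream classifier or link predictor, which is how unsupervised representations of this kind are typically evaluated.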