Paper Title

K-Core based Temporal Graph Convolutional Network for Dynamic Graphs

Paper Authors

Jingxin Liu, Chang Xu, Chang Yin, Weiqiang Wu, You Song

Paper Abstract

Graph representation learning is a fundamental task in various applications that strives to learn low-dimensional embeddings for nodes that can preserve graph topology information. However, many existing methods focus on static graphs while ignoring evolving graph patterns. Inspired by the success of graph convolutional networks (GCNs) in static graph embedding, we propose a novel k-core based temporal graph convolutional network, CTGCN, to learn node representations for dynamic graphs. In contrast to previous dynamic graph embedding methods, CTGCN can preserve both local connective proximity and global structural similarity while simultaneously capturing graph dynamics. In the proposed framework, the traditional graph convolution is generalized into two phases, feature transformation and feature aggregation, which gives CTGCN more flexibility and enables it to learn connective and structural information under the same framework. Experimental results on seven real-world graphs demonstrate that CTGCN outperforms existing state-of-the-art graph embedding methods in several tasks, including link prediction and structural role classification. The source code of this work is available at https://github.com/jhljx/CTGCN.
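
To make the two-phase generalization concrete, here is a minimal illustrative sketch in Python. It is not the authors' CTGCN implementation (see the linked repository for that); the helper names `kcore_adjacencies` and `TwoPhaseGraphConv` are hypothetical, and the row-normalized adjacency and ReLU aggregation are simplifying assumptions. It shows a graph convolution split into a feature-transformation phase and a feature-aggregation phase, applied per k-core subgraph extracted with networkx.

```python
import networkx as nx
import torch
import torch.nn as nn


def kcore_adjacencies(G: nx.Graph, max_k: int):
    """Return row-normalized adjacency tensors for the 1-core .. max_k-core
    of G, all indexed over G's full node set (nodes outside a given core
    simply get zero rows)."""
    nodes = sorted(G.nodes())
    idx = {n: i for i, n in enumerate(nodes)}
    adjs = []
    for k in range(1, max_k + 1):
        core = nx.k_core(G, k=k)                     # subgraph induced by the k-core
        A = torch.zeros(len(nodes), len(nodes))
        for u, v in core.edges():
            A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        adjs.append(A / deg)                         # simple row normalization
    return adjs


class TwoPhaseGraphConv(nn.Module):
    """Graph convolution split into the two phases the abstract describes:
    (1) feature transformation, (2) feature aggregation."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.transform = nn.Linear(in_dim, out_dim)  # phase 1: learnable map

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.transform(x)        # phase 1: transform each node's features
        return torch.relu(adj @ h)   # phase 2: aggregate over neighbors


if __name__ == "__main__":
    G = nx.karate_club_graph()
    x = torch.randn(G.number_of_nodes(), 16)         # toy node features
    layer = TwoPhaseGraphConv(16, 8)
    # One convolution per k-core subgraph; in a full model the per-core
    # outputs would be combined and fed through a temporal component.
    outputs = [layer(x, adj) for adj in kcore_adjacencies(G, max_k=3)]
    print([o.shape for o in outputs])
```

Separating the transformation from the aggregation is what lets the same framework swap in different aggregation rules (e.g., over connective neighborhoods versus structurally similar nodes) without changing the learnable part.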
