Paper Title
Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation
Authors
Abstract
Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concern about adopting GNNs in various safety-critical applications. In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA), in which the adversary poisons the graph by injecting fake nodes instead of modifying existing structures or node attributes. Inspired by findings that adversarial attacks are related to increased heterophily on perturbed graphs (the adversary tends to connect dissimilar nodes), we propose a general defense framework, CHAGNN, against GIA through cooperative homophilous augmentation of the graph data and the model. Specifically, the model generates pseudo-labels for unlabeled nodes in each round of training, which are used to remove heterophilous edges between nodes with distinct labels. The cleaner graph is fed back to the model, producing more informative pseudo-labels. In such an iterative manner, model robustness is promisingly enhanced. We present a theoretical analysis of the effect of homophilous augmentation and provide a guarantee of the proposal's validity. Experimental results empirically demonstrate the effectiveness of CHAGNN in comparison with recent state-of-the-art defense methods on diverse real-world datasets.
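The cooperative loop described in the abstract (pseudo-label, prune heterophilous edges, re-predict on the cleaner graph) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and the `predict` callable stands in for the GNN, which in CHAGNN would actually be retrained each round.

```python
def prune_heterophilous(edges, pseudo_labels):
    """Keep only homophilous edges, i.e. edges whose endpoints share a pseudo-label."""
    return [(u, v) for (u, v) in edges if pseudo_labels[u] == pseudo_labels[v]]

def cooperative_augmentation(edges, labels, predict, rounds=3):
    """Alternate between pseudo-labelling and edge pruning.

    `predict(edges, labels)` is a stand-in for the GNN: it maps the current
    graph and the known labels to a pseudo-label for every node. Each round
    prunes edges with mismatched pseudo-labels, then re-predicts on the
    resulting cleaner graph.
    """
    for _ in range(rounds):
        pseudo = predict(edges, labels)          # model produces pseudo-labels
        edges = prune_heterophilous(edges, pseudo)  # data is cleaned with them
    return edges
```

The cooperation is the key design point: cleaner edges yield better pseudo-labels, and better pseudo-labels identify more of the adversary's heterophilous injected edges in the next round.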