Paper Title
Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representations from features and edges among nodes in graph data. To prevent privacy leakage in GNNs, we propose a novel heterogeneous randomized response (HeteroRR) mechanism that protects nodes' features and edges against PIAs under differential privacy (DP) guarantees without an undue cost in data and model utility when training GNNs. Our idea is to balance the importance and sensitivity of nodes' features and edges when redistributing the privacy budgets, since some features and edges are more sensitive or more important to model utility than others. As a result, we derive significantly better randomization probabilities and tighter error bounds at both the feature level and the edge level, departing from existing approaches and enabling us to maintain high data utility for training GNNs. An extensive theoretical and empirical analysis on benchmark datasets shows that HeteroRR significantly outperforms various baselines in terms of model utility under rigorous privacy protection for both nodes' features and edges. This enables us to defend against PIAs in DP-preserving GNNs effectively.
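To make the core idea concrete, the sketch below illustrates how an importance-weighted budget split can plug into standard binary randomized response. The allocation rule (budget proportional to an importance weight) and the function names are illustrative assumptions for exposition only, not the paper's exact HeteroRR mechanism or its derived randomization probabilities.

```python
import math
import random

def flip_probability(epsilon: float) -> float:
    # Standard binary randomized response: keep the true bit with
    # probability e^eps / (1 + e^eps), flip it otherwise. This
    # satisfies eps-local differential privacy for one bit.
    return 1.0 / (1.0 + math.exp(epsilon))

def heterogeneous_rr(bits, importance, total_budget):
    # Hypothetical sketch (not the paper's exact allocation rule):
    # split the total privacy budget across features in proportion
    # to their importance weights, so more important features get a
    # larger budget and hence a smaller flip probability.
    weight_sum = sum(importance)
    noisy = []
    for bit, weight in zip(bits, importance):
        eps_i = total_budget * weight / weight_sum  # per-feature budget
        p_flip = flip_probability(eps_i)
        noisy.append(bit if random.random() >= p_flip else 1 - bit)
    return noisy
```

By sequential composition, the per-feature budgets sum to the total budget, so the whole feature vector is released under the overall DP guarantee while less noise lands on the features that matter most to model utility.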