Paper Title
Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs

Paper Authors

Hongwei Jin, Xun Chen

Paper Abstract
Learning the similarity between structured data, especially graphs, is one of the essential problems. Besides approaches such as graph kernels, the Gromov-Wasserstein (GW) distance has recently drawn significant attention due to its flexibility in capturing both topological and feature characteristics, as well as its ability to handle permutation invariance. However, structured data are widely distributed across different data mining and machine learning applications. Due to privacy concerns, access to the decentralized data is limited to individual clients or separate silos. To tackle these issues, we propose a privacy-preserving framework that analyzes the GW discrepancy of node embeddings learned locally from graph neural networks in a federated flavor, and then explicitly applies local differential privacy (LDP) based on a multi-bit encoder to protect sensitive information. Our experiments show that, with the strong privacy protection guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under the GW distance, resulting in comparable or even better performance on classification and clustering tasks. Moreover, we analyze the rationale behind the LDP-based GW distance both analytically and empirically.
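The multi-bit encoder mentioned in the abstract perturbs each dimension of a node embedding independently under an $\varepsilon$-LDP budget, encoding it to one of two levels and then rescaling so the perturbed value is an unbiased estimate of the original. The following is a minimal numpy sketch of that idea, assuming embeddings are pre-scaled to a known range; `multibit_encode` and its parameters are illustrative and are not the authors' implementation.

```python
import numpy as np

def multibit_encode(x, eps, lo=-1.0, hi=1.0, rng=None):
    """Hypothetical sketch of a multi-bit-style LDP encoder.

    Each feature of x (assumed to lie in [lo, hi]) is randomized to one of
    two levels under an eps-LDP budget split evenly across dimensions, then
    rescaled ("rectified") so that E[output] equals the input.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    d = x.shape[-1]
    e = np.exp(eps / d)  # per-dimension budget eps/d

    # Probability of emitting the "high" bit grows linearly with x;
    # it ranges over [1/(e+1), e/(e+1)], which bounds the likelihood
    # ratio of any two inputs by e = exp(eps/d) per dimension.
    p_high = 1.0 / (e + 1.0) + (x - lo) / (hi - lo) * (e - 1.0) / (e + 1.0)
    signs = np.where(rng.random(x.shape) < p_high, 1.0, -1.0)

    # Rectify the +/-1 bits back to the feature scale so the estimate
    # is unbiased: E[signs] = (e-1)/(e+1) * (2(x-lo)/(hi-lo) - 1).
    scale = (hi - lo) / 2.0 * (e + 1.0) / (e - 1.0)
    return (lo + hi) / 2.0 + scale * signs
```

Averaging many encodings of the same embedding recovers it in expectation, which is what lets the downstream GW computation operate on a noised but structurally faithful metric.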