Paper Title

Data Heterogeneity Differential Privacy: From Theory to Algorithm

Authors

Kang, Yilin, Li, Jian, Liu, Yong, Wang, Weiping

Abstract

Traditionally, in the field of differential privacy (DP), random noise is injected equally when training with different data instances. In this paper, we first give sharper excess risk bounds for the DP stochastic gradient descent (SGD) method. Since most previous methods assume convexity, we relax this assumption via the Polyak-Łojasiewicz condition. Then, observing that different training data instances affect the machine learning model to different extents, we take the heterogeneity of the training data into account and attempt to improve the performance of DP-SGD from a new perspective. Specifically, by introducing the influence function (IF), we quantitatively measure the contribution of each training data instance to the final machine learning model. If the contribution made by a single data instance is so small that an attacker cannot infer anything from the model, we do not add noise when training with it. Based on this observation, we design a `Performance Improving' DP-SGD algorithm: PIDP-SGD. Theoretical and experimental results show that the proposed PIDP-SGD improves performance significantly.
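The selective noise injection described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's algorithm: the per-sample "influence" proxy (based on clipped gradient norm), the threshold `tau`, and all hyperparameters are assumptions made here for demonstration, whereas the paper computes contributions via the influence function.

```python
import numpy as np

def pidp_sgd(X, y, epochs=5, lr=0.1, sigma=0.5, clip=1.0, tau=0.05, seed=0):
    """Toy sketch of selective-noise DP-SGD on a least-squares objective.

    A per-sample influence proxy decides whether Gaussian noise is injected
    for that sample: low-influence instances are trained on without noise.
    The proxy, `tau`, and `sigma` are illustrative assumptions only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            residual = X[i] @ w - y[i]
            g = residual * X[i]                 # per-sample gradient
            norm = np.linalg.norm(g)
            g = g / max(1.0, norm / clip)       # standard DP-SGD gradient clipping
            influence = norm / (norm + 1.0)     # toy influence proxy in [0, 1)
            if influence > tau:                 # inject noise only for influential samples
                g = g + sigma * clip * rng.standard_normal(d)
            w = w - lr * g
    return w
```

On synthetic linear data, samples near the current fit have near-zero gradients, so their influence proxy falls below `tau` and their updates stay noise-free; the noise budget is spent only on the instances that actually move the model, which is the intuition behind PIDP-SGD's performance gain.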
