Paper Title

Three Variants of Differential Privacy: Lossless Conversion and Applications

Authors

Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

Abstract

We consider three different variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis test DP. In the first part, we develop a machinery for optimally relating approximate DP to RDP based on the joint range of two $f$-divergences that underlie the approximate DP and RDP. In particular, this enables us to derive the optimal approximate DP parameters of a mechanism that satisfies a given level of RDP. As an application, we apply our result to the moments accountant framework for characterizing privacy guarantees of noisy stochastic gradient descent (SGD). When compared to the state-of-the-art, our bounds may lead to about 100 more stochastic gradient descent iterations for training deep learning models for the same privacy budget. In the second part, we establish a relationship between RDP and hypothesis test DP which allows us to translate the RDP constraint into a tradeoff between type I and type II error probabilities of a certain binary hypothesis test. We then demonstrate that for noisy SGD our result leads to tighter privacy guarantees compared to the recently proposed $f$-DP framework for some range of parameters.
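To make the accounting pipeline the abstract refers to concrete, here is a minimal sketch of the *standard* (non-optimal) RDP-based analysis of noisy SGD that the paper's optimal conversion improves upon: the Gaussian mechanism with noise multiplier `sigma` and sensitivity 1 satisfies (α, α/(2σ²))-RDP, RDP composes additively over iterations, and the classical conversion ε = ε_RDP + log(1/δ)/(α−1) turns the composed RDP bound into an approximate-DP guarantee. This sketch assumes full-batch gradients (no subsampling amplification) and uses the baseline conversion, not the paper's tighter joint-range result; the helper names are illustrative, not from the paper.

```python
import math

def gaussian_rdp(alpha, sigma):
    # RDP of the Gaussian mechanism with noise multiplier sigma and
    # L2 sensitivity 1: eps_RDP(alpha) = alpha / (2 * sigma^2).
    return alpha / (2.0 * sigma ** 2)

def rdp_to_approx_dp(rdp_eps, alpha, delta):
    # Classical conversion from (alpha, eps_RDP)-RDP to (eps, delta)-DP:
    # eps = eps_RDP + log(1/delta) / (alpha - 1). The paper derives the
    # optimal version of this step; this is the standard baseline.
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

def noisy_sgd_epsilon(steps, sigma, delta, alphas=range(2, 64)):
    # RDP composes additively across the `steps` iterations; convert to
    # approximate DP at each candidate alpha and keep the smallest eps.
    return min(
        rdp_to_approx_dp(steps * gaussian_rdp(a, sigma), a, delta)
        for a in alphas
    )
```

Because the composed RDP curve grows linearly in the number of iterations, a tighter RDP-to-DP conversion directly buys extra SGD steps at a fixed (ε, δ) budget, which is the effect the abstract quantifies.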
