Paper Title

Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning

Paper Authors

Marin Matsumoto, Tsubasa Takahashi, Seng Pei Liew, Masato Oguchi

Abstract

Local differential privacy (LDP) provides a strong privacy guarantee that can be used in distributed settings such as federated learning (FL). LDP mechanisms in FL protect a client's gradient by randomizing it on the client; however, how can we interpret the privacy level given by the randomization? Moreover, what types of attacks can we mitigate in practice? To answer these questions, we introduce an empirical privacy test that measures lower bounds of LDP. The privacy test estimates how well an adversary can predict whether a reported randomized gradient was crafted from a raw gradient $g_1$ or $g_2$. We then instantiate six adversaries in FL under LDP to measure empirical LDP at various attack surfaces, including a worst-case attack that reaches the theoretical upper bound of LDP. The empirical privacy test with the adversary instantiations enables us to interpret LDP more intuitively and to discuss relaxation of the privacy parameter until a particular instantiated attack surfaces. We also present numerical observations of the measured privacy in these adversarial settings and show that the worst-case attack is not realistic in FL. Finally, we discuss possible relaxations of the privacy level in FL under LDP.
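
To make the distinguishing game behind the empirical privacy test concrete, the sketch below simulates one such test under stated assumptions: `randomize` is a hypothetical client-side LDP mechanism (per-coordinate clipping plus Laplace noise), the nearest-gradient adversary is a single illustrative instantiation rather than any of the paper's six, and `empirical_epsilon` is a name introduced here for illustration. The bound follows the standard hypothesis-testing view of $\epsilon$-DP, which implies $e^{\epsilon} \geq (1-\mathrm{FPR})/\mathrm{FNR}$ and $e^{\epsilon} \geq (1-\mathrm{FNR})/\mathrm{FPR}$ for any adversary's false-positive and false-negative rates.

```python
import numpy as np


def randomize(gradient, epsilon, clip=1.0, rng=None):
    """Hypothetical client-side LDP randomizer: clip each coordinate to
    [-clip, clip] and add Laplace noise calibrated to the L1 sensitivity
    of the clipped vector (2 * clip * d). A stand-in only; the paper's
    actual mechanism may differ."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.clip(np.asarray(gradient, dtype=float), -clip, clip)
    scale = 2.0 * clip * g.size / epsilon
    return g + rng.laplace(0.0, scale, size=g.shape)


def empirical_epsilon(g1, g2, adversary, epsilon, trials=20_000, rng=None):
    """Distinguishing game: randomize either g1 or g2 (coin flip), let the
    adversary guess which one was used, and convert its false-positive /
    false-negative rates into an empirical lower bound on epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    fp = fn = n1 = n2 = 0
    for _ in range(trials):
        use_g1 = rng.random() < 0.5
        report = randomize(g1 if use_g1 else g2, epsilon, rng=rng)
        guess_g1 = adversary(report)
        if use_g1:
            n1 += 1
            fn += int(not guess_g1)   # adversary said g2, but g1 was used
        else:
            n2 += 1
            fp += int(guess_g1)       # adversary said g1, but g2 was used
    fnr, fpr = fn / n1, fp / n2
    # epsilon-DP implies FPR + e^eps * FNR >= 1 and FNR + e^eps * FPR >= 1,
    # so any observed (FPR, FNR) pair certifies a lower bound on epsilon.
    return max(np.log((1.0 - fpr) / max(fnr, 1e-12)),
               np.log((1.0 - fnr) / max(fpr, 1e-12)))


if __name__ == "__main__":
    # Illustrative adversary that knows both candidates: pick the closer one.
    g1, g2 = np.zeros(16), np.full(16, 0.5)
    nearest = lambda r: np.abs(r - g1).sum() < np.abs(r - g2).sum()
    print("empirical lower bound:", empirical_epsilon(g1, g2, nearest, epsilon=1.0))
```

A rigorous audit would replace the point estimates of FPR and FNR with confidence intervals (e.g., Clopper-Pearson) before taking logarithms, and would swap in different adversaries to probe the various attack surfaces the paper discusses.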
