Paper Title


Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms

Paper Authors

Saeed Mahloujifar, Alexandre Sablayrolles, Graham Cormode, Somesh Jha

Paper Abstract


Given a trained model and a data sample, membership-inference (MI) attacks predict whether the sample was in the model's training set. A common countermeasure against MI attacks is to utilize differential privacy (DP) during model training to mask the presence of individual examples. While this use of DP is a principled approach to limiting the efficacy of MI attacks, there is a gap between the bounds provided by DP and the empirical performance of MI attacks. In this paper, we derive bounds for the \textit{advantage} of an adversary mounting an MI attack, and demonstrate tightness for the widely used Gaussian mechanism. We further show bounds on the \textit{confidence} of MI attacks. Our bounds are much stronger than those obtained by DP analysis. For example, in a DP-SGD setting with $ε=4$, our analysis yields an upper bound of $\approx 0.36$ on the advantage, while the analyses of previous works that convert $ε$ to membership inference bounds only give a bound of $\approx 0.97$. Finally, using our analysis, we provide MI metrics for models trained on the CIFAR10 dataset. To the best of our knowledge, our analysis provides the state-of-the-art membership inference bounds for privacy-preserving training.
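To make the quoted numbers concrete, here is a short back-of-the-envelope check (our own illustration, assuming the standard hypothesis-testing conversion from an $(ε, δ)$-DP guarantee to MI advantage; the baseline conversion used by previous work may differ slightly):

$$\mathrm{Adv} \;\le\; \frac{e^{ε} - 1 + 2δ}{e^{ε} + 1}, \qquad \text{so for } ε = 4,\ δ \approx 0: \quad \frac{e^{4} - 1}{e^{4} + 1} \approx 0.96.$$

This generic DP-to-MI conversion is what yields figures near the $\approx 0.97$ quoted above, whereas the mechanism-specific analysis of the sampled Gaussian mechanism tightens the advantage bound to $\approx 0.36$ in the same $ε = 4$ setting.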
