Paper Title
Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation
Paper Authors
Paper Abstract
Although local differential privacy (LDP) protects individual users' data from inference by an untrusted data curator, recent studies show that an attacker can launch a data poisoning attack from the user side, injecting carefully crafted bogus data into the LDP protocols to maximally skew the data curator's final estimate. In this work, we further advance this knowledge by proposing a new fine-grained attack that allows the attacker to fine-tune and simultaneously manipulate mean and variance estimation, two analytical tasks popular in many real-world applications. To accomplish this goal, the attack leverages the characteristics of LDP to inject fake data directly into the output domain of the local LDP instance. We call our attack the output poisoning attack (OPA). We observe a security-privacy consistency in which a smaller privacy loss enhances the security of LDP, contradicting the security-privacy trade-off known from prior work. We study this consistency further and reveal a more holistic view of the threat landscape of data poisoning attacks on LDP. We comprehensively evaluate our attack against a baseline attack that intuitively supplies false input to the LDP protocol. The experimental results show that OPA outperforms the baseline on three real-world datasets. We also propose a novel defense method that can recover result accuracy from the polluted data collection and offer insights into secure LDP design.
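
To make the input-versus-output poisoning distinction concrete, below is a minimal Python sketch of our own; it is an illustration, not the paper's protocol. It assumes Duchi et al.'s one-bit ε-LDP mechanism for mean estimation over [-1, 1] (the abstract does not fix a mechanism), covers mean estimation only for brevity, and contrasts a baseline input poisoning attack, where fake users feed a false input through the mechanism, with an OPA-style output poisoning attack, where fake users submit values directly in the output domain {-B, +B}. All function names and parameters are hypothetical.

```python
import numpy as np

# Duchi et al.'s one-bit LDP mechanism for mean estimation over [-1, 1]:
# each user reports +B or -B, with B = (e^eps + 1) / (e^eps - 1), chosen so
# that the report is an unbiased estimate of the user's true value.
# (Illustrative stand-in for the LDP protocols discussed in the abstract.)

def ldp_perturb(x, eps, rng):
    """Perturb a true value x in [-1, 1] into the output domain {-B, +B}."""
    B = (np.exp(eps) + 1) / (np.exp(eps) - 1)
    p = 0.5 + x * (np.exp(eps) - 1) / (2 * (np.exp(eps) + 1))
    return B if rng.random() < p else -B

def estimate_mean(reports):
    """Server-side aggregation: the average of unbiased per-user reports."""
    return float(np.mean(reports))

rng = np.random.default_rng(0)
eps = 1.0
B = (np.exp(eps) + 1) / (np.exp(eps) - 1)

genuine = rng.uniform(-1, 1, size=9000)                  # honest users' data
honest_reports = [ldp_perturb(x, eps, rng) for x in genuine]

# Baseline input poisoning: fake users push an extreme *input* (x = 1)
# through the mechanism; its randomization dilutes the injected bias.
ipa_reports = [ldp_perturb(1.0, eps, rng) for _ in range(1000)]

# OPA-style output poisoning: fake users skip the mechanism and submit the
# extreme *output-domain* value +B directly, so no randomization weakens it.
opa_reports = [B] * 1000

print("true mean:          ", genuine.mean())
print("no attack estimate: ", estimate_mean(honest_reports))
print("input poisoning:    ", estimate_mean(honest_reports + ipa_reports))
print("output poisoning:   ", estimate_mean(honest_reports + opa_reports))
```

In this toy setting, each fake input-poisoning report is worth at most its true value (here 1) in expectation, while a fake output-domain report is worth B > 1 and is never averaged down by the mechanism's noise; the crafted values also lie inside the legitimate output domain, so simple range checks cannot flag them. This is the intuition behind the abstract's claim that injecting into the output domain gives the attacker finer, undiluted control over the final estimate.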