Paper Title

Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks

Paper Authors

Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li

Paper Abstract

Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users. As local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is usually trained in a differentially private way (DPFL). Thus, in this paper, we ask: What are the underlying connections between differential privacy and certified robustness in FL against poisoning attacks? Can we leverage the innate privacy property of DPFL to provide certified robustness for FL? Can we further improve the privacy of FL to improve such robustness certification? We first investigate both user-level and instance-level privacy of FL and provide formal privacy analysis to achieve improved instance-level privacy. We then provide two robustness certification criteria: certified prediction and certified attack inefficacy for DPFL on both user and instance levels. Theoretically, we provide the certified robustness of DPFL based on both criteria given a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of poisoning attacks on different datasets. We find that increasing the level of privacy protection in DPFL results in stronger certified attack inefficacy; however, it does not necessarily lead to a stronger certified prediction. Thus, achieving the optimal certified prediction requires a proper balance between privacy and utility loss.
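
The abstract refers to training FL models "in a differentially private way (DPFL)" at the user level. As a rough illustration only, and not the authors' exact construction, the sketch below shows one DP-FedAvg-style aggregation round in Python: each user's model update is clipped to a fixed norm, the clipped updates are averaged, and Gaussian noise calibrated to the clipping bound is added, so any single user's influence on the aggregate is bounded. The function and parameter names (`dp_fedavg_round`, `clip_norm`, `noise_multiplier`) are illustrative assumptions.

```python
import numpy as np

def dp_fedavg_round(global_model, user_updates, clip_norm=1.0,
                    noise_multiplier=1.0, rng=None):
    """One user-level DP federated-averaging round (illustrative sketch).

    Each user's update is clipped to `clip_norm`; Gaussian noise scaled to
    the per-user sensitivity of the average (clip_norm / n) is then added,
    which bounds any single user's contribution to the released model.
    """
    rng = rng or np.random.default_rng()
    # Clip each user's update so its L2 norm is at most clip_norm.
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in user_updates]
    avg = np.mean(clipped, axis=0)
    # Sensitivity of the average to one user is clip_norm / n.
    noise_std = noise_multiplier * clip_norm / len(user_updates)
    noisy_avg = avg + rng.normal(0.0, noise_std, size=avg.shape)
    return global_model + noisy_avg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = np.zeros(10)
    # Simulated local updates from 50 users.
    user_updates = [rng.normal(size=10) for _ in range(50)]
    new_model = dp_fedavg_round(global_model, user_updates,
                                clip_norm=1.0, noise_multiplier=1.0, rng=rng)
    print(new_model)
```

In this sketch, a larger `noise_multiplier` yields stronger privacy, which, per the abstract's empirical finding, strengthens certified attack inefficacy but does not necessarily improve certified prediction, hence the privacy-utility balance the authors highlight.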
