Paper Title

Federated Learning in Adversarial Settings

Paper Authors

Raouf Kerkouche, Gergely Ács, Claude Castelluccia

Paper Abstract

Federated Learning enables entities to collaboratively learn a shared prediction model while keeping their training data local. It avoids centralized data collection and aggregation and therefore mitigates the associated privacy risks. However, it remains vulnerable to various security attacks in which malicious participants aim to degrade the generated model, insert backdoors, or infer other participants' training data. This paper presents a new federated learning scheme that offers different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy. Our scheme uses biased quantization of model updates and is therefore bandwidth efficient. It is also robust against state-of-the-art backdoor and model degradation attacks, even when a large proportion of the participant nodes are malicious. We propose a practical differentially private extension of this scheme that protects the whole dataset of each participating entity. We show that this extension performs as efficiently as the non-private but robust scheme, even under stringent privacy requirements, but is less robust against model degradation and backdoor attacks. This suggests a possible fundamental trade-off between differential privacy and robustness.
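The abstract does not spell out the quantization mechanism, but sign-based 1-bit compression with coordinate-wise majority voting is a common instantiation of biased quantization in robust federated learning, and it matches the bandwidth and robustness claims above. The sketch below illustrates that idea, together with a clip-and-noise differentially private variant. This is a minimal sketch under stated assumptions, not the paper's implementation; the function names (`sign_quantize`, `majority_vote_aggregate`, `dp_sign_quantize`) and parameters (`clip_norm`, `noise_multiplier`) are illustrative and not taken from the paper.

```python
import numpy as np

def sign_quantize(update):
    """Biased 1-bit quantization: keep only the sign of each coordinate.

    Each parameter update is compressed to a single bit, so upstream
    bandwidth per round drops by roughly 32x versus float32 updates.
    """
    return np.sign(update)  # values in {-1, 0, +1}

def majority_vote_aggregate(sign_updates):
    """Server-side aggregation by coordinate-wise majority vote.

    Each client contributes at most +/-1 per coordinate, so a malicious
    minority cannot scale its update to dominate the aggregate.
    """
    stacked = np.stack(sign_updates)      # shape: (n_clients, dim)
    return np.sign(stacked.sum(axis=0))   # majority sign per coordinate

def dp_sign_quantize(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Hypothetical DP variant (assumption, not the paper's algorithm):
    clip the update in L2 norm, add Gaussian noise, then take the sign."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noised = clipped + rng.normal(0.0, noise_multiplier * clip_norm,
                                  size=update.shape)
    return np.sign(noised)

# Toy round: 5 honest clients plus 2 malicious ones that scale their
# updates by -10x. Scaling does not help them: only signs are counted.
rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 1.0, size=8) for _ in range(5)]
malicious = [-10.0 * h for h in honest[:2]]
votes = [sign_quantize(u) for u in honest + malicious]
print(majority_vote_aggregate(votes))
print(dp_sign_quantize(honest[0], clip_norm=1.0, noise_multiplier=1.0, rng=rng))
```

Note how the toy round reflects both claims in the abstract: the majority vote caps each participant's per-coordinate influence at one bit, while the DP variant's added noise flips some signs, which is one intuition for why robustness degrades as privacy requirements tighten.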
