Paper Title
A Framework for Evaluating Gradient Leakage Attacks in Federated Learning
Authors
Abstract
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to share only local training parameter updates with the federated server. However, recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks, intruding on client privacy with respect to its training data. In this paper, we present a principled framework for evaluating and comparing different forms of client privacy leakage attacks. We first provide formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training (e.g., the local gradient or weight update vector). We then analyze how different hyperparameter configurations in federated learning and different settings of the attack algorithm may impact both attack effectiveness and attack cost. Our framework also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when communication-efficient FL protocols are used. Our experiments also include some preliminary mitigation strategies, highlighting the importance of a systematic attack evaluation framework for an in-depth understanding of the various forms of client privacy leakage threats in federated learning and for developing theoretical foundations for attack mitigation.
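The abstract's claim that an adversary can reconstruct private training data from a shared parameter update can be illustrated with a minimal sketch, not taken from the paper: for a fully-connected layer trained on a single sample, each row of the weight gradient is a scalar multiple of the input, so the input is recoverable analytically from the shared weight and bias gradients. All shapes and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client's private sample and a small fully-connected layer (hypothetical sizes).
x = rng.normal(size=4)          # private input the attacker wants to recover
W = rng.normal(size=(3, 4))     # layer weights (known to the server)
b = rng.normal(size=3)          # layer bias

# Forward pass and a simple squared-error loss against a random target.
z = W @ x + b
target = rng.normal(size=3)
delta = z - target              # dL/dz for L = 0.5 * ||z - target||^2

# Gradients the client would share with the federated server.
grad_W = np.outer(delta, x)     # dL/dW = delta x^T
grad_b = delta                  # dL/db = delta

# Attack: row i of dL/dW equals (dL/db)_i * x, so dividing any row of the
# weight gradient by the matching bias gradient recovers the private input.
i = int(np.argmax(np.abs(grad_b)))   # pick a row with a non-zero bias gradient
x_reconstructed = grad_W[i] / grad_b[i]
```

For deeper models the recovery is not analytic, and attacks instead optimize a dummy input so that its gradient matches the shared one; the single-layer case above shows why the shared update leaks the data at all.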
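The gradient compression ratios mentioned above refer to communication-efficient FL schemes that transmit only a fraction of the gradient entries. A minimal sketch of one common such scheme, top-k sparsification, is shown below; the function name and the example ratio are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def compress_topk(grad, ratio):
    """Keep only the largest-magnitude fraction `ratio` of gradient entries,
    zeroing the rest, as in top-k sparsified gradient sharing."""
    flat = grad.ravel().copy()
    k = max(1, int(flat.size * ratio))
    keep = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep] = True
    flat[~mask] = 0.0
    return flat.reshape(grad.shape)

# Example: keep half of the entries of a toy 2x3 gradient.
g = np.array([[0.5, -0.01, 2.0],
              [0.03, -1.5, 0.002]])
g_compressed = compress_topk(g, 0.5)
```

Because the zeroed entries remove information that gradient-matching attacks rely on, evaluating attack effectiveness as a function of this ratio is a natural axis for the framework.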