Paper Title

iDLG: Improved Deep Leakage from Gradients

Authors

Bo Zhao, Konda Reddy Mopuri, Hakan Bilen

Abstract

It is widely believed that sharing gradients does not leak private training data in distributed learning systems such as collaborative learning and federated learning. Recently, Zhu et al. presented an approach that shows the possibility of obtaining private training data from publicly shared gradients. In their Deep Leakage from Gradients (DLG) method, they synthesize dummy data and corresponding labels under the supervision of the shared gradients. However, DLG has difficulty converging and discovering the ground-truth labels consistently. In this paper, we show that sharing gradients definitely leaks the ground-truth labels. We propose a simple but reliable approach to extract accurate data from the gradients. In particular, our approach can extract the ground-truth labels with certainty, as opposed to DLG; hence we name it Improved DLG (iDLG). Our approach is valid for any differentiable model trained with cross-entropy loss over one-hot labels. We mathematically illustrate how our method extracts ground-truth labels from the gradients and empirically demonstrate its advantages over DLG.
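The label-recovery rule summarized above follows from one observation: with cross-entropy loss over one-hot labels, the gradient of the loss with respect to the i-th logit of the last fully-connected layer is p_i - y_i, which is negative only at the ground-truth class. Below is a minimal PyTorch sketch of that idea, not the authors' released code; the toy model, its shapes, and the random input are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup: a small classifier whose last layer is fully
# connected, trained with cross-entropy over one-hot labels.
torch.manual_seed(0)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

x = torch.randn(1, 1, 28, 28)   # one private training sample
y = torch.tensor([7])           # its (secret) ground-truth label

loss = F.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, tuple(model.parameters()))

# For logits z = W h + b, the gradient dL/dz_i = p_i - y_i is negative
# only at the ground-truth index, since p_i is in (0, 1) and y is one-hot.
# The last layer's bias gradient equals p - y directly, so its single
# negative entry reveals the label.
db = grads[-1]                  # gradient of the last layer's bias
label_from_bias = db.argmin().item()

# Without a bias term, the same sign pattern survives in the weight
# gradient: row i of dL/dW equals (p_i - y_i) * h, so only the
# ground-truth row has a non-positive inner product with every other row.
dW = grads[-2]                  # gradient of the last layer's weight
inner = dW @ dW.t()             # inner[i, j] = (p_i - y_i)(p_j - y_j) ||h||^2
off_diag = inner - torch.diag(torch.diag(inner))
mask = (off_diag <= 0).all(dim=1)         # true only for the ground-truth row
label_from_weight = mask.nonzero().item() # assumes a single qualifying row

print(label_from_bias, label_from_weight, y.item())  # all agree: 7
```

Both read-outs are analytic, which is why iDLG can recover the label exactly rather than optimizing a dummy label as DLG does.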
