Paper Title

On the effect of normalization layers on Differentially Private training of deep Neural networks

Paper Authors

Ali Davody, David Ifeoluwa Adelani, Thomas Kleinbauer, Dietrich Klakow

Abstract

Differentially private stochastic gradient descent (DPSGD) is a variant of stochastic gradient descent based on the Differential Privacy (DP) paradigm, which can mitigate privacy threats arising from the presence of sensitive information in training data. However, one major drawback of training deep neural networks with DPSGD is a reduction in the model's accuracy. In this paper, we study the effect of normalization layers on the performance of DPSGD. We demonstrate that normalization layers significantly impact the utility of deep neural networks with noisy parameters and should be considered essential ingredients of training with DPSGD. In particular, we propose a novel method for integrating batch normalization with DPSGD without incurring an additional privacy loss. With our approach, we are able to train deeper networks and achieve a better utility-privacy trade-off.
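For readers unfamiliar with DPSGD itself, the core update it refers to (per-example gradient clipping followed by Gaussian noise addition, as in standard DPSGD) can be sketched as follows. This is a minimal illustrative sketch on a toy linear model, not the paper's method; the hyperparameters `clip_norm`, `noise_multiplier`, and `lr` are assumed values chosen for the example.

```python
# Minimal sketch of one DPSGD step: clip each per-example gradient to a
# fixed norm, sum, add Gaussian noise scaled by the clip norm, then average.
# Toy squared-loss linear model; hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dpsgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One DPSGD update for the loss 0.5 * (x @ w - y)**2 per example."""
    grad_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        g = (xi @ w - yi) * xi                            # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)   # clip to clip_norm
        grad_sum += g
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (grad_sum + noise) / len(X)

# Toy data: recover w_true from noisy private updates.
X = rng.normal(size=(32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = dpsgd_step(w, X, y)
```

The clipping bounds each example's influence on the update (the sensitivity), which is what makes the added Gaussian noise yield a differential-privacy guarantee; the accuracy cost of that clipping and noise is the drawback the abstract refers to.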
