Paper Title

Differentially Private Learning with Margin Guarantees

Paper Authors

Raef Bassily, Mehryar Mohri, Ananda Theertha Suresh

Paper Abstract

We present a series of new differentially private (DP) algorithms with dimension-independent margin guarantees. For the family of linear hypotheses, we give a pure DP learning algorithm that benefits from relative deviation margin guarantees, as well as an efficient DP learning algorithm with margin guarantees. We also present a new efficient DP learning algorithm with margin guarantees for kernel-based hypotheses with shift-invariant kernels, such as Gaussian kernels, and point out how our results can be extended to other kernels using oblivious sketching techniques. We further give a pure DP learning algorithm for a family of feed-forward neural networks for which we prove margin guarantees that are independent of the input dimension. Additionally, we describe a general label DP learning algorithm, which benefits from relative deviation margin bounds and is applicable to a broad family of hypothesis sets, including that of neural networks. Finally, we show how our DP learning algorithms can be augmented in a general way to include model selection, to select the best confidence margin parameter.
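The kernel-based result above concerns shift-invariant kernels such as the Gaussian kernel, for which the hypothesis class can be approximated with finite-dimensional random features before applying a DP learner. As a rough illustration of that approximation step only (this is a standard random Fourier features sketch, not the paper's DP algorithm; all function names and parameters here are illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Exact shift-invariant Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def random_fourier_features(X, D=2000, sigma=1.0, seed=0):
    """Map rows of X to D-dimensional features whose inner products
    approximate the Gaussian kernel (Bochner's theorem)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density N(0, I / sigma^2).
    W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Sanity check: the feature inner product tracks the exact kernel value.
rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)
Z = random_fourier_features(np.vstack([x, y]), D=5000)
approx = Z[0] @ Z[1]
exact = gaussian_kernel(x, y)
```

With the data mapped into this fixed-dimensional feature space, a DP learner for linear hypotheses can then be applied; the dimension `D` of the feature map, not the input dimension, governs the construction.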
