Paper Title
Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning
Paper Authors
Paper Abstract
Normalization is an important but understudied challenge in privacy-related application domains such as federated learning (FL), differential privacy (DP), and differentially private federated learning (DP-FL). While the unsuitability of batch normalization for these domains has already been shown, the impact of other normalization methods on the performance of federated or differentially private models is not well understood. To address this, we compare the performance of layer normalization (LayerNorm), group normalization (GroupNorm), and the recently proposed kernel normalization (KernelNorm) in FL, DP, and DP-FL settings. Our results indicate that LayerNorm and GroupNorm provide no performance gain over the baseline (i.e., no normalization) for shallow models in FL and DP. They do, on the other hand, considerably enhance the performance of shallow models in DP-FL and of deeper models in FL and DP. Moreover, KernelNorm significantly outperforms its competitors in terms of accuracy and convergence rate (or communication efficiency) for both shallow and deeper models in all considered learning environments. Given these key observations, we propose a kernel normalized ResNet architecture called KNResNet-13 for differentially private learning. Using the proposed architecture, we provide new state-of-the-art accuracy values on the CIFAR-10 and Imagenette datasets when training from scratch.
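Since the abstract only names the technique, the following PyTorch module sketches the core idea behind kernel normalization as we read it: each kernel-sized input window (spanning all input channels) is standardized before the convolution weights are applied, so the normalization statistics depend on a single sample rather than the batch, which is what makes BatchNorm problematic under DP and FL. This is a minimal illustrative sketch under those assumptions, with hypothetical names (`KernelNormConv2d`); it is not the authors' implementation, whose handling of details such as padding or regularization may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelNormConv2d(nn.Module):
    """Illustrative kernel-normalized convolution (not the authors' layer):
    standardize every kernel-sized patch of the input, then convolve."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, eps=1e-5):
        super().__init__()
        # Flattened convolution weights: (out_channels, in_channels * k * k).
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_channels,
                               in_channels * kernel_size * kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.eps = eps

    def forward(self, x):
        n, _, h, w = x.shape
        # Extract sliding patches: (N, C*k*k, L), L = number of positions.
        patches = F.unfold(x, self.kernel_size, stride=self.stride,
                           padding=self.padding)
        # Standardize each patch over its C*k*k elements; statistics are
        # per-sample and per-position, hence independent of the batch.
        mean = patches.mean(dim=1, keepdim=True)
        var = patches.var(dim=1, unbiased=False, keepdim=True)
        patches = (patches - mean) / torch.sqrt(var + self.eps)
        # Convolution as a matrix multiply on the normalized patches.
        out = self.weight @ patches + self.bias[:, None]  # (N, out_ch, L)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.reshape(n, -1, h_out, w_out)


# Usage sketch: same call signature as a plain Conv2d.
layer = KernelNormConv2d(3, 16, kernel_size=3, stride=1, padding=1)
y = layer(torch.randn(8, 3, 32, 32))  # -> (8, 16, 32, 32)
```

Because the normalization is computed entirely within one sample, a layer of this form avoids the cross-sample coupling that rules BatchNorm out for per-sample gradient clipping in DP and for heterogeneous client batches in FL.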