Title
Kernel Normalized Convolutional Networks
Authors
Abstract
Existing convolutional neural network architectures frequently rely on batch normalization (BatchNorm) to train models effectively. BatchNorm, however, performs poorly with small batch sizes and is inapplicable to differentially private training. To address these limitations, we propose kernel normalization (KernelNorm) and the kernel normalized convolutional layer, and incorporate them into kernel normalized convolutional networks (KNConvNets) as the main building blocks. We implement KNConvNets corresponding to the state-of-the-art ResNets while forgoing the BatchNorm layers. Through extensive experiments, we show that KNConvNets achieve higher or competitive performance compared to their BatchNorm counterparts in image classification and semantic segmentation. They also significantly outperform their batch-independent competitors, including those based on layer and group normalization, in both non-private and differentially private training. KernelNorm thus combines the batch-independence of layer and group normalization with the performance advantage of BatchNorm.
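The abstract does not spell out the normalization mechanism, but the name suggests that KernelNorm standardizes the input elements falling inside each kernel-sized sliding window, making the statistic per-sample rather than per-batch. The following is a minimal PyTorch sketch under that assumption; the class name KNConv2d, its parameters, and the unfold-based formulation are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KNConv2d(nn.Module):
    """Hypothetical sketch of a kernel normalized convolution: every
    kernel-sized input patch is standardized (zero mean, unit variance
    over its own C*k*k elements) before the convolution weights are
    applied, so the normalization never looks across the batch."""

    def __init__(self, in_ch, out_ch, kernel_size,
                 stride=1, padding=0, eps=1e-5):
        super().__init__()
        self.kernel_size, self.stride, self.padding, self.eps = (
            kernel_size, stride, padding, eps)
        # Convolution expressed as a matrix acting on unfolded patches.
        self.weight = nn.Parameter(
            torch.empty(out_ch, in_ch * kernel_size * kernel_size))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        n, _, h, w = x.shape
        # (N, C*k*k, L): one column per sliding-window position.
        patches = F.unfold(x, self.kernel_size,
                           stride=self.stride, padding=self.padding)
        # Standardize each patch independently of all other samples.
        mu = patches.mean(dim=1, keepdim=True)
        var = patches.var(dim=1, keepdim=True, unbiased=False)
        patches = (patches - mu) / torch.sqrt(var + self.eps)
        # Apply the convolution as a matmul on the normalized patches.
        out = self.weight @ patches + self.bias[:, None]  # (N, out_ch, L)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.view(n, -1, h_out, w_out)

layer = KNConv2d(in_ch=3, out_ch=16, kernel_size=3, padding=1)
y = layer(torch.randn(4, 3, 32, 32))  # -> torch.Size([4, 16, 32, 32])
```

Because the statistics are computed per patch and per sample, such a layer would remain well defined at batch size one and would not leak information across examples, which is consistent with the batch-independence and differential-privacy claims in the abstract.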