Paper Title

Normalization effects on shallow neural networks and related asymptotic expansions

Authors

Jiahui Yu, Konstantinos Spiliopoulos

Abstract

We consider shallow (single hidden layer) neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units $N$ and the number of gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the $1/\sqrt{N}$ and the mean-field $1/N$ normalization. We develop an asymptotic expansion for the neural network's statistical output, pointwise with respect to the scaling parameter, as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that to leading order in $N$ there is no bias-variance trade-off, in that both bias and variance (both explicitly characterized) decrease as the number of hidden units increases and time grows. In addition, we show that to leading order in $N$, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization. Numerical studies on the MNIST and CIFAR10 datasets show that test and train accuracy improve monotonically as the neural network's normalization gets closer to the mean-field normalization.
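The family of scalings described above can be illustrated with a small numerical sketch. The setup below is our own illustration, not the paper's construction: a randomly initialized shallow network with output $N^{-\gamma} \sum_{i=1}^{N} c_i\, \sigma(w_i \cdot x)$, where $\gamma = 1/2$ corresponds to the $1/\sqrt{N}$ scaling and $\gamma = 1$ to the mean-field $1/N$ scaling. The experiment checks the qualitative claim that the output variance is smaller the closer the normalization is to mean field.

```python
import numpy as np

# Hypothetical sketch (symbols and setup are assumptions, not the paper's):
# shallow network output  g_N(x) = N^{-gamma} * sum_i c_i * tanh(w_i . x),
# with gamma = 1/2 giving the 1/sqrt(N) scaling and gamma = 1 the
# mean-field 1/N scaling discussed in the abstract.

def shallow_net_output(x, N, gamma, rng):
    """Output of a randomly initialized shallow network under N^{-gamma} scaling."""
    c = rng.standard_normal(N)                 # output-layer weights
    w = rng.standard_normal((N, x.shape[0]))   # hidden-layer weights
    return N ** (-gamma) * np.sum(c * np.tanh(w @ x))

def output_variance(N, gamma, trials=2000, seed=0):
    """Empirical variance of the network output at initialization."""
    rng = np.random.default_rng(seed)
    x = np.ones(3)  # fixed input point
    samples = [shallow_net_output(x, N, gamma, rng) for _ in range(trials)]
    return float(np.var(samples))

# At initialization the variance scales like N^(1 - 2*gamma), so the
# mean-field scaling (gamma = 1) yields a much smaller output variance
# than the 1/sqrt(N) scaling (gamma = 1/2) for the same N.
v_sqrt = output_variance(N=500, gamma=0.5)
v_mf = output_variance(N=500, gamma=1.0)
print(v_mf < v_sqrt)
```

Note that this only probes the network at random initialization; the paper's results concern the trained network's statistical output, where the bias and variance are characterized through the asymptotic expansion.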
