Paper Title
Over-parametrized neural networks as under-determined linear systems
Paper Authors
Paper Abstract
We draw connections between simple neural networks and under-determined linear systems to comprehensively explore several interesting theoretical questions in the study of neural networks. First, we emphatically show that it is unsurprising such networks can achieve zero training loss. More specifically, we provide lower bounds on the width of a single hidden layer neural network such that only training the last linear layer suffices to reach zero training loss. Our lower bounds grow more slowly with data set size than existing work that trains the hidden layer weights. Second, we show that kernels typically associated with the ReLU activation function have fundamental flaws -- there are simple data sets where it is impossible for widely studied bias-free models to achieve zero training loss irrespective of how the parameters are chosen or trained. Lastly, our analysis of gradient descent clearly illustrates how spectral properties of certain matrices impact both the early iteration and long-term training behavior. We propose new activation functions that avoid the pitfalls of ReLU in that they admit zero training loss solutions for any set of distinct data points and experimentally exhibit favorable spectral properties.
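The core connection described above can be illustrated with a minimal sketch, which is not taken from the paper: with the hidden-layer weights fixed at random, fitting only the last linear layer reduces to solving a linear system that is under-determined whenever the width exceeds the number of training points. The variable names, dimensions, and the choice of ReLU features below are illustrative assumptions, not the paper's construction or bounds.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's construction):
# with fixed random hidden weights W, fitting only the last linear layer a
# means solving Phi @ a = y, where Phi[i, j] = relu(x_i . w_j).
# When the width m exceeds the number of samples n and Phi has full row rank,
# the system is under-determined and admits an exact (zero training loss)
# solution, e.g. the minimum-norm one given by the pseudoinverse.
# Note: the paper shows that for some simple data sets, bias-free ReLU
# features can fail to have full row rank, so no choice of a reaches zero
# loss; the random data below is the generic, well-behaved case.

rng = np.random.default_rng(0)
n, d, m = 50, 10, 200                      # samples, input dim, hidden width (m > n)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

W = rng.standard_normal((d, m))            # fixed random hidden-layer weights
Phi = np.maximum(X @ W, 0.0)               # hidden-layer features, ReLU activation

a = np.linalg.pinv(Phi) @ y                # minimum-norm solution of Phi a = y
train_mse = np.mean((Phi @ a - y) ** 2)
print(f"training MSE: {train_mse:.2e}")    # ~0 when Phi has full row rank
```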