Paper Title

Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability

Paper Authors

Alex Damian, Eshaan Nichani, Jason D. Lee

Paper Abstract

Traditional analyses of gradient descent show that when the largest eigenvalue of the Hessian, also known as the sharpness $S(θ)$, is bounded by $2/η$, training is "stable" and the training loss decreases monotonically. Recent works, however, have observed that this assumption does not hold when training modern neural networks with full batch or large batch gradient descent. Most recently, Cohen et al. (2021) observed two important phenomena. The first, dubbed progressive sharpening, is that the sharpness steadily increases throughout training until it reaches the instability cutoff $2/η$. The second, dubbed edge of stability, is that the sharpness hovers at $2/η$ for the remainder of training while the loss continues decreasing, albeit non-monotonically. We demonstrate that, far from being chaotic, the dynamics of gradient descent at the edge of stability can be captured by a cubic Taylor expansion: as the iterates diverge in the direction of the top eigenvector of the Hessian due to instability, the cubic term in the local Taylor expansion of the loss function causes the curvature to decrease until stability is restored. This property, which we call self-stabilization, is a general property of gradient descent and explains its behavior at the edge of stability. A key consequence of self-stabilization is that gradient descent at the edge of stability implicitly follows projected gradient descent (PGD) under the constraint $S(θ) \le 2/η$. Our analysis provides precise predictions for the loss, sharpness, and deviation from the PGD trajectory throughout training, which we verify both empirically in a number of standard settings and theoretically under mild conditions. Our analysis uncovers the mechanism for gradient descent's implicit bias towards stability.
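To make the quantities in the abstract concrete, below is a minimal sketch (illustrative, not the paper's experimental setup) that runs full-batch gradient descent on an assumed two-parameter toy loss $L(a, b) = \tfrac{1}{2}(ab - 1)^2$ and logs the sharpness $S(θ)$, the largest eigenvalue of the Hessian, against the stability threshold $2/η$. The toy loss, learning rate, and initialization are assumptions chosen for illustration; the initialization sits near a minimum whose sharpness exceeds $2/η$, the regime in which the abstract's self-stabilization mechanism predicts that the growing oscillation along the top eigendirection pushes the sharpness back down rather than letting training diverge.

```python
# Minimal sketch (illustrative, not the paper's setup): full-batch gradient
# descent on an assumed toy loss L(a, b) = 0.5 * (a*b - 1)^2, logging the
# sharpness S(theta) = lambda_max(Hessian) against the threshold 2/eta.
import numpy as np

def loss(theta):
    a, b = theta
    return 0.5 * (a * b - 1.0) ** 2

def grad(theta):
    a, b = theta
    r = a * b - 1.0                      # residual a*b - 1
    return np.array([r * b, r * a])

def sharpness(theta):
    a, b = theta
    # Exact 2x2 Hessian of the toy loss; S(theta) is its largest eigenvalue.
    H = np.array([[b * b, 2.0 * a * b - 1.0],
                  [2.0 * a * b - 1.0, a * a]])
    return np.linalg.eigvalsh(H)[-1]     # eigvalsh returns eigenvalues in ascending order

eta = 0.05                                  # step size; stability threshold 2/eta = 40
theta = np.array([6.5, 1.0 / 6.5 + 1e-2])   # near a minimum whose sharpness (~42) exceeds 2/eta

for t in range(201):
    if t % 20 == 0:
        print(f"step {t:3d}  loss {loss(theta):.3e}  "
              f"sharpness {sharpness(theta):6.2f}  (2/eta = {2.0 / eta:.1f})")
    theta = theta - eta * grad(theta)
```

The exact 2x2 Hessian keeps the sharpness cheap to compute in this toy problem; for real networks one would instead approximate the top Hessian eigenvalue iteratively from Hessian-vector products (e.g., power iteration or Lanczos).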
