Paper Title

MaxVA: Fast Adaptation of Step Sizes by Maximizing Observed Variance of Gradients

Paper Authors

Chen Zhu, Yu Cheng, Zhe Gan, Furong Huang, Jingjing Liu, Tom Goldstein

Paper Abstract

Adaptive gradient methods such as RMSProp and Adam use an exponential moving estimate of the squared gradient to compute adaptive step sizes, achieving better convergence than SGD in the face of noisy objectives. However, Adam can have undesirable convergence behaviors due to unstable or extreme adaptive learning rates. Methods such as AMSGrad and AdaBound have been proposed to stabilize the adaptive learning rates of Adam in the later stage of training, but they do not outperform Adam in some practical tasks such as training Transformers. In this paper, we propose an adaptive learning rate principle, in which the running mean of the squared gradient in Adam is replaced by a weighted mean, with weights chosen to maximize the estimated variance of each coordinate. This results in a faster adaptation to the local gradient variance, which leads to more desirable empirical convergence behaviors than Adam. We prove the proposed algorithm converges under mild assumptions for nonconvex stochastic optimization problems, and demonstrate the improved efficacy of our adaptive averaging approach on machine translation, natural language understanding and large-batch pretraining of BERT. The code is available at https://github.com/zhuchen03/MaxVA.
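As a rough illustration of the principle described in the abstract, the sketch below modifies a bare-bones Adam update so that the averaging weight for the squared gradient is chosen, per step and per coordinate, from a small candidate set to maximize the estimated gradient variance. This is a simplified sketch under our own assumptions: the function name `maxva_like_step`, the candidate weight grid, the hyperparameters, and the omission of bias correction are illustrative and not taken from the paper, which derives the maximizing weight in closed form; the authoritative implementation is in the linked repository.

```python
import numpy as np

def maxva_like_step(param, grad, m, v, lr=1e-3, beta1=0.9,
                    beta2_candidates=(0.5, 0.9, 0.98, 0.999), eps=1e-8):
    """One Adam-like step where the squared-gradient averaging weight is
    picked per coordinate to maximize the estimated variance v - m**2.
    Names and hyperparameters here are illustrative assumptions."""
    m = beta1 * m + (1.0 - beta1) * grad            # first moment, as in Adam
    best_var = np.full_like(grad, -np.inf)          # best variance estimate so far
    best_v = v.copy()
    for b2 in beta2_candidates:                     # candidate averaging weights
        v_cand = b2 * v + (1.0 - b2) * grad ** 2    # weighted mean of squared gradients
        var_est = v_cand - m ** 2                   # per-coordinate variance estimate
        pick = var_est > best_var                   # keep the variance-maximizing weight
        best_var = np.where(pick, var_est, best_var)
        best_v = np.where(pick, v_cand, best_v)
    v = best_v
    param = param - lr * m / (np.sqrt(v) + eps)     # Adam-style update
    return param, m, v

# Toy usage: minimize f(x) = 0.5 * ||x||^2 under noisy gradients.
rng = np.random.default_rng(0)
x = np.ones(4)
m, v = np.zeros_like(x), np.zeros_like(x)
for _ in range(100):
    g = x + 0.1 * rng.standard_normal(x.shape)      # noisy gradient of f
    x, m, v = maxva_like_step(x, g, m, v, lr=1e-2)
```

The candidate grid here stands in for the continuous range of averaging weights that the paper optimizes over analytically; the point of the sketch is only to show how a larger or smaller weight on the newest squared gradient can be selected wherever it yields a larger per-coordinate variance estimate.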
