Paper Title
Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization
Paper Authors
Paper Abstract
We propose and analyze several stochastic gradient algorithms for finding stationary points or local minima in nonconvex finite-sum and online optimization problems, possibly with a nonsmooth regularizer. First, we propose a simple proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. We provide a clean and tight analysis of ProxSVRG+, which shows that it outperforms deterministic proximal gradient descent (ProxGD) for a wide range of minibatch sizes, hence solving an open problem proposed in Reddi et al. (2016b). Also, ProxSVRG+ uses far fewer proximal oracle calls than ProxSVRG (Reddi et al., 2016b) and extends to the online setting by avoiding full gradient computations. Then, we further propose an optimal algorithm, called SSRGD, based on SARAH (Nguyen et al., 2017), and show that SSRGD further improves the gradient complexity of ProxSVRG+ and achieves the optimal upper bound, matching the known lower bound (Fang et al., 2018; Li et al., 2021). Moreover, we show that both ProxSVRG+ and SSRGD enjoy automatic adaptation to the local structure of the objective function, such as the Polyak-Łojasiewicz (PL) condition for nonconvex functions in the finite-sum case; i.e., we prove that both of them can automatically switch to faster global linear convergence without the restarts performed in the prior work ProxSVRG (Reddi et al., 2016b). Finally, we focus on the more challenging problem of finding an $(ε, δ)$-local minimum instead of just an $ε$-approximate (first-order) stationary point (which may be a bad unstable saddle point). We show that SSRGD can find an $(ε, δ)$-local minimum by simply adding some random perturbations. Our algorithm is almost as simple as its counterpart for finding stationary points, and it achieves similar optimal rates.
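To make the variance-reduction idea behind the ProxSVRG+/SSRGD family concrete, below is a minimal Python sketch of a SARAH-style recursive gradient estimator combined with a proximal step for an ℓ1 regularizer. This is only an illustration under simplifying assumptions, not the paper's exact algorithms: the helper names (`grad_i`, `prox_l1`, `sarah_prox_sketch`) and the hyperparameters (`eta`, `lam`, `epoch_len`, `batch`) are made up for this example, the step and minibatch sizes do not follow the paper's theory, and the random-perturbation step that SSRGD uses to escape saddle points is omitted.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sarah_prox_sketch(grad_i, n, x0, eta=0.01, lam=0.0,
                      epoch_len=100, n_epochs=10, batch=10, seed=None):
    """Illustrative SARAH-style recursive variance-reduced proximal method.

    grad_i(x, i) returns the gradient of the i-th component f_i at x.
    This is a simplified sketch, not the exact SSRGD/ProxSVRG+ of the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_epochs):
        # Full gradient at the start of each epoch (finite-sum setting).
        v = np.mean([grad_i(x, i) for i in range(n)], axis=0)
        x_prev = x.copy()
        x = prox_l1(x - eta * v, eta * lam)
        for _ in range(epoch_len):
            idx = rng.integers(0, n, size=batch)
            # SARAH recursion: v_t = g_B(x_t) - g_B(x_{t-1}) + v_{t-1},
            # where g_B is the minibatch gradient on the sampled indices.
            g_new = np.mean([grad_i(x, i) for i in idx], axis=0)
            g_old = np.mean([grad_i(x_prev, i) for i in idx], axis=0)
            v = g_new - g_old + v
            x_prev = x.copy()
            x = prox_l1(x - eta * v, eta * lam)
    return x

if __name__ == "__main__":
    # Toy example: l1-regularized least squares, f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    rng = np.random.default_rng(0)
    n, d = 200, 20
    A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
    x_hat = sarah_prox_sketch(grad_i, n, np.zeros(d), eta=0.005, lam=0.1, seed=1)
```

The key point the sketch tries to convey is the recursive estimator `v`: unlike SVRG-type estimators, it is updated from the previous iterate's minibatch gradient rather than from a fixed snapshot, which is what drives the improved gradient complexity discussed in the abstract.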