Title
A piecewise conservative method for unconstrained convex optimization
Authors
Abstract
We consider a continuous-time optimization method based on a dynamical system, where a massive particle starting at rest moves in the conservative force field generated by the objective function, without any kind of friction. We formulate a restart criterion based on the mean dissipation of the kinetic energy, and we prove a global convergence result for strongly convex functions. Using the symplectic Euler discretization scheme, we obtain an iterative optimization algorithm. We consider a discrete mean-dissipation restart scheme, and we also introduce a new restart procedure that ensures, at each iteration, a decrease of the objective function greater than the one achieved by a step of the classical gradient method. For the discrete conservative algorithm, this last restart criterion guarantees a convergence result. We apply the same restart scheme to the Nesterov Accelerated Gradient (NAG-C), and we use this restarted NAG-C as a benchmark in the numerical experiments. On the smooth convex problems considered, our method shows a faster convergence rate than the restarted NAG-C. We propose an extension of our discrete conservative algorithm to composite optimization: in numerical tests involving non-strongly convex functions with $\ell^1$-regularization, it performs better than the well-known Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) accelerated with an adaptive restart scheme.
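The core ideas in the abstract can be illustrated with a minimal sketch: a symplectic Euler discretization of the frictionless conservative dynamics, combined with a restart that requires each step to decrease the objective at least as much as a classical gradient step. This is an assumption-based illustration, not the paper's exact algorithm; the step sizes `h` and `s` and the function `conservative_restart_minimize` are hypothetical choices for the example.

```python
import numpy as np

def conservative_restart_minimize(f, grad, x0, h=0.1, s=0.1, iters=500):
    """Illustrative sketch (not the paper's exact scheme):
    symplectic Euler on x'' = -grad f(x) with a gradient-decrease restart."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # the particle starts at rest
    for _ in range(iters):
        g = grad(x)
        # Symplectic Euler: update the velocity first, then the position
        v_new = v - h * g
        x_cons = x + h * v_new
        # Comparison point: one step of the classical gradient method
        x_gd = x - s * g
        if f(x_cons) > f(x_gd):
            # Restart: take the gradient step and reset the particle to rest
            x, v = x_gd, np.zeros_like(x)
        else:
            x, v = x_cons, v_new
    return x

# Example on a simple strongly convex quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = conservative_restart_minimize(f, grad, np.array([3.0, -2.0]))
```

Either branch of the restart test yields f(x_{k+1}) <= f(x_k - s * grad f(x_k)), so the objective decreases at least as fast as plain gradient descent, which is what makes a convergence guarantee possible for the discrete conservative algorithm.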