Paper Title
Accelerated Algorithms for a Class of Optimization Problems with Constraints
Paper Authors
Paper Abstract
This paper presents a framework to solve constrained optimization problems in an accelerated manner based on High-Order Tuners (HT). Our approach is based on reformulating the original constrained problem as the unconstrained optimization of a loss function. We start with convex optimization problems and identify the conditions under which the loss function is convex. Building on the insight that the loss function could be convex even if the original optimization problem is not, we extend our approach to a class of nonconvex optimization problems. The use of an HT together with this approach enables us to achieve a convergence rate better than state-of-the-art gradient-based methods. Moreover, for equality-constrained optimization problems, the proposed method ensures that the state remains feasible throughout the evolution, regardless of the convexity of the original problem.
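The abstract does not spell out the loss construction or the High-Order Tuner update, so the following is only a minimal illustrative sketch of the general idea it describes: recast an equality-constrained problem as unconstrained minimization of a loss and then apply an accelerated update. The quadratic-penalty loss, the toy data (Q, c, A, b), the penalty weight mu, and the Nesterov-style momentum step used as a stand-in for an HT are all assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: quadratic-penalty reformulation of an
# equality-constrained toy problem, minimized with a Nesterov-style
# accelerated gradient step as a generic stand-in for a High-Order Tuner.
import numpy as np

# Toy problem: minimize f(x) = 0.5 x^T Q x - c^T x  subject to  A x = b.
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

mu = 50.0  # penalty weight (hypothetical choice)

def loss(x):
    # Unconstrained loss: original objective plus quadratic penalty
    # on the equality-constraint violation A x - b.
    r = A @ x - b
    return 0.5 * x @ Q @ x - c @ x + 0.5 * mu * r @ r

def grad(x):
    return Q @ x - c + mu * A.T @ (A @ x - b)

# Nesterov-style accelerated gradient descent on the penalized loss.
x = np.zeros(2)
y = x.copy()
step = 1.0 / (np.linalg.norm(Q, 2) + mu * np.linalg.norm(A.T @ A, 2))
t = 1.0
for _ in range(500):
    x_next = y - step * grad(y)
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_next + ((t - 1.0) / t_next) * (x_next - x)
    x, t = x_next, t_next

print("approximate minimizer:", x)
print("constraint residual  :", A @ x - b)
```

Under these assumptions the momentum step plays the role of the accelerated (higher-order) dynamics; the paper's HT-based method and its feasibility guarantee for equality constraints are developed in the paper itself.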