Paper Title


A Randomized Nonlinear Rescaling Method in Large-Scale Constrained Convex Optimization

Authors

Bo Wei, William B. Haskell, Sixiang Zhao

Abstract


We propose a new randomized algorithm for solving convex optimization problems that have a large number of constraints (with high probability). Existing methods like interior-point or Newton-type algorithms are hard to apply to such problems because of their expensive computation and storage requirements for Hessians and matrix inversions. Our algorithm is based on nonlinear rescaling (NLR), a primal-dual-type algorithm by Griva and Polyak [Math. Program., 106(2):237-259, 2006]. NLR introduces an equivalent problem through a transformation of the constraint functions, minimizes the corresponding augmented Lagrangian for given dual variables, and then uses this minimizer to update the dual variables for the next iteration. The primal update at each iteration is the solution of an unconstrained finite-sum minimization problem whose terms are weighted by the current dual variables. We use randomized first-order algorithms, which are especially well suited to this structure, to perform these primal updates. In particular, we use the scaled dual variables as the sampling distribution for each primal update, and we show that this distribution is the optimal one among all probability distributions. We conclude by demonstrating the favorable numerical performance of our algorithm.
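To make the abstract's outer/inner structure concrete, here is a minimal sketch of an NLR-style primal-dual loop on a hypothetical toy instance (not from the paper). It uses the exponential transformation ψ(t) = 1 − e^{−t} as one admissible constraint transformation, a plain SGD inner solver as the randomized first-order method, and samples each constraint with probability proportional to its (scaled) dual variable, as the abstract describes. All problem data, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 2
x_star = np.array([1.0, 1.0])

# Toy instance (assumed, for illustration): minimize ||x - (2,2)||^2
# subject to A x <= b.  Row 0 is the binding constraint x1 + x2 <= 2
# (optimum x* = (1,1)); the remaining rows are random half-planes that
# contain both the origin and x* with strictly positive slack.
A = np.vstack([np.ones((1, n)), rng.normal(size=(m - 1, n))])
slack = np.concatenate([[0.0], rng.uniform(0.5, 2.0, size=m - 1)])
b = np.maximum(A @ x_star, 0.0) + slack

def objective_grad(x):
    return 2.0 * (x - np.array([2.0, 2.0]))  # f(x) = ||x - (2,2)||^2

k = 2.0            # scaling parameter of the transformation
lam = np.ones(m)   # dual variables
x = np.zeros(n)

for outer in range(20):
    S = lam.sum()
    p = lam / S    # scaled duals as the sampling distribution
    # Inner (primal) phase: SGD on the augmented Lagrangian
    #   L(x, lam) = f(x) - (1/k) sum_i lam_i * psi(k * g_i(x)),
    # with slacks g_i(x) = b_i - a_i^T x >= 0.
    for _ in range(3000):
        i = rng.choice(m, p=p)
        g_i = b[i] - A[i] @ x
        # Unbiased gradient estimate: the importance weight 1/p_i = S/lam_i
        # cancels lam_i, leaving S * psi'(k g_i) * A_i.
        grad = objective_grad(x) + S * np.exp(-k * g_i) * A[i]
        x -= 0.005 * grad
    # Outer (dual) phase: lam_i <- lam_i * psi'(k * g_i(x)).
    g = b - A @ x
    lam = lam * np.exp(-k * g)
```

As the outer loop progresses, the multipliers of the slack constraints decay geometrically, so the sampling distribution p concentrates on the binding constraint; this is the mechanism behind the abstract's claim that the scaled duals form a good sampling distribution.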
