Paper Title

Asynchronous Optimization over Graphs: Linear Convergence under Error Bound Conditions

Paper Authors

Loris Cannelli, Francisco Facchinei, Gesualdo Scutari, Vyacheslav Kungurtsev

Paper Abstract

We consider convex and nonconvex constrained optimization with a partially separable objective function: agents minimize the sum of local objective functions, each of which is known only by the associated agent and depends on the variables of that agent and those of a few others. This partitioned setting arises in several applications of practical interest. We propose what is, to the best of our knowledge, the first distributed, asynchronous algorithm with rate guarantees for this class of problems. When the objective function is nonconvex, the algorithm provably converges to a stationary solution at a sublinear rate, whereas a linear rate is achieved when the objective satisfies the renowned Luo-Tseng error bound condition (which is less stringent than strong convexity). Numerical results on matrix completion and LASSO problems show the effectiveness of our method.
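As a rough illustration of the partitioned setting described in the abstract (the notation below is ours, not necessarily the paper's), the problem can be written in LaTeX as:

\min_{x = (x_1, \dots, x_N)} \; F(x) \triangleq \sum_{i=1}^{N} f_i\big(x_i, \{x_j\}_{j \in \mathcal{N}_i}\big), \qquad \text{s.t.} \; x_i \in K_i, \; i = 1, \dots, N,

where agent i controls the block of variables x_i within a local constraint set K_i (assumed here for illustration), \mathcal{N}_i denotes the few neighboring agents whose variables also enter f_i, and each f_i is known only to its associated agent.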
