Paper Title
Distributed and Inexact Proximal Gradient Method for Online Convex Optimization
Paper Authors
Abstract
This paper develops and analyzes an online distributed proximal-gradient method (DPGM) for time-varying composite convex optimization problems. Each node of the network has a local cost composed of a smooth strongly convex function and a non-smooth convex function, both changing over time. By coordinating through a connected communication network, the nodes collaboratively track the trajectory of the minimizers without exchanging their local cost functions. The DPGM is implemented in an online fashion, that is, in a setting where only a limited number of steps can be taken before the cost functions change. Moreover, the algorithm is analyzed in an inexact scenario, that is, with a source of additive noise that can represent, for example, communication noise or quantization. It is shown that the tracking error of the online inexact DPGM is upper-bounded by a convergent linear system, guaranteeing convergence to within a neighborhood of the optimal solution.
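To illustrate the kind of update the abstract describes, the sketch below runs an online, inexact proximal-gradient iteration on a single node for a drifting composite cost 0.5·||x − b_t||² + λ·||x||₁. This is a simplified sketch under stated assumptions, not the paper's algorithm: the quadratic smooth term, the ℓ1 non-smooth term, the drift rate, and the noise level are all illustrative choices, and the consensus (communication) step of the actual distributed DPGM is omitted.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_prox_grad_step(x, grad, alpha, lam, noise_std, rng):
    """One inexact proximal-gradient step: a gradient step on the smooth
    part, additive noise (modeling e.g. quantization or communication
    error), then the prox of the non-smooth l1 term."""
    z = x - alpha * grad(x)
    z = z + noise_std * rng.standard_normal(z.shape)  # inexactness
    return soft_threshold(z, alpha * lam)

# Online tracking of a slowly drifting cost
#   f_t(x) = 0.5 * ||x - b_t||^2 + lam * ||x||_1
# (all problem data here is illustrative, not taken from the paper).
rng = np.random.default_rng(1)
x = np.zeros(3)
lam, alpha = 0.1, 0.5
for t in range(50):
    b_t = np.array([1.0, -2.0, 0.0]) + 0.01 * t  # drifting optimum
    # only one step is taken before the cost changes (online regime)
    x = inexact_prox_grad_step(x, lambda y, b=b_t: y - b, alpha, lam, 1e-3, rng)

# x now hovers in a neighborhood of the instantaneous minimizer
# soft_threshold(b_t, lam); the residual error is driven by the drift
# of b_t and by the injected noise, mirroring the abstract's
# neighborhood-convergence guarantee.
```

Because the smooth part is strongly convex, each step contracts toward the current minimizer, so the tracking error settles at a level proportional to the drift and noise rather than vanishing.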