Paper Title
Decentralized Optimization with Distributed Features and Non-Smooth Objective Functions
Paper Authors
Paper Abstract
We develop a new consensus-based distributed algorithm for solving learning problems with feature partitioning and non-smooth convex objective functions. Such learning problems are not separable, i.e., the associated objective function cannot be written directly as a sum of agent-specific objective functions. To overcome this challenge, we reformulate the underlying optimization problem as a dual convex problem whose structure is suitable for distributed optimization using the alternating direction method of multipliers (ADMM). Next, we propose a new method for solving the minimization problem associated with the ADMM update step that does not rely on any conjugate function. Calculating the relevant conjugate functions may be hard or even infeasible, especially when the objective function is non-smooth. To obviate computing any conjugate function, we solve the optimization problem associated with each ADMM iteration in the dual domain using the block coordinate descent algorithm. Unlike existing related algorithms, the proposed algorithm is fully distributed and dispenses with the conjugate of the objective function. We prove theoretically that the proposed algorithm attains the optimal centralized solution. We also confirm its network-wide convergence via simulations.
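To make the feature-partitioning setting concrete, the sketch below writes a generic problem of this type in illustrative notation; the symbols K, A_k, x_k, f, and g_k are assumptions introduced for this example and need not match the paper's exact formulation. Each agent holds one column block of the data matrix and the corresponding block of the decision variable, and the loss couples all blocks through their sum, which is why the objective is not separable across agents.

```latex
% Illustrative feature-partitioned problem (assumed notation, not the paper's):
% K agents; agent k holds the column block A_k of the data matrix and the
% variable block x_k; f is a convex (possibly non-smooth) loss and g_k a
% convex, possibly non-smooth local regularizer.
\begin{equation*}
  \min_{x_1,\dots,x_K}\;
  f\!\Big(\sum_{k=1}^{K} A_k x_k\Big) \;+\; \sum_{k=1}^{K} g_k(x_k).
\end{equation*}
% Because f acts on the sum of all A_k x_k, the objective cannot be split
% into per-agent terms; introducing local estimates of the aggregate with
% consensus constraints over the network is one standard way to obtain a
% form amenable to distributed ADMM.
```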