Paper Title
Distributed Primal-Dual Optimization for Online Multi-Task Learning
Paper Authors
Paper Abstract
Conventional online multi-task learning algorithms suffer from two critical limitations: 1) heavy communication caused by streaming high-velocity sequential data to a central machine; 2) expensive runtime complexity for building task relatedness. To address these issues, in this paper we consider a setting where multiple tasks are geographically distributed, and each task can synchronize with others to leverage the knowledge of related tasks. Specifically, we propose an adaptive primal-dual algorithm that not only captures task-specific noise in adversarial learning but also carries out a projection-free update for runtime efficiency. Moreover, our model is well suited to decentralized, periodically connected tasks, as it allows energy-starved or bandwidth-constrained tasks to postpone their updates. Theoretical results establish a convergence guarantee for our distributed algorithm with optimal regret. Empirical results confirm that the proposed model is highly effective on various real-world datasets.
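The abstract describes the method only at a high level, so the following is a minimal Python sketch of what a decentralized online primal-dual loop of this flavor could look like, not the paper's actual algorithm. Everything concrete here is an assumption: the hinge loss, the AdaGrad-style adaptive step, the norm-ball constraint handled through a dual variable (giving a projection-free primal step), the uniform task-relatedness weights, and the names `TaskNode` and `synchronize` are all illustrative.

```python
import numpy as np

class TaskNode:
    """One geographically distributed task.

    Illustrative sketch only: the hinge loss, the AdaGrad-style step,
    and the norm-ball constraint are assumptions, not the paper's
    exact construction.
    """

    def __init__(self, dim, eta=0.1):
        self.w = np.zeros(dim)         # primal model for this task
        self.lam = 0.0                 # dual variable for the constraint ||w||^2 <= 1
        self.grad_sq = np.zeros(dim)   # per-coordinate accumulator (adaptive step sizes)
        self.eta = eta                 # base learning rate

    def local_update(self, x, y):
        """One adaptive primal-dual step on a single example (x, y), y in {-1, +1}."""
        margin = y * self.w.dot(x)
        g = -y * x if margin < 1.0 else np.zeros_like(x)  # hinge-loss subgradient
        g = g + 2.0 * self.lam * self.w                   # gradient of lam * (||w||^2 - 1)
        self.grad_sq += g * g
        # Projection-free primal step: the constraint is enforced through the
        # dual variable rather than an explicit projection onto the ball.
        self.w -= self.eta * g / (np.sqrt(self.grad_sq) + 1e-8)
        # Dual ascent on the constraint violation, clipped at zero.
        self.lam = max(0.0, self.lam + self.eta * (self.w.dot(self.w) - 1.0))


def synchronize(nodes, relatedness):
    """Periodic sync: each task mixes its model with related tasks' models.

    `relatedness[i][j]` is an assumed (row-stochastic) task-relatedness weight.
    """
    mixed = [sum(relatedness[i][j] * n.w for j, n in enumerate(nodes))
             for i in range(len(nodes))]
    for node, w in zip(nodes, mixed):
        node.w = w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, num_tasks = 5, 3
    nodes = [TaskNode(dim) for _ in range(num_tasks)]
    uniform = [[1.0 / num_tasks] * num_tasks for _ in range(num_tasks)]

    for t in range(200):
        for k, node in enumerate(nodes):
            # Task 2 plays the role of a bandwidth-constrained node that
            # postpones most of its updates (the deferred-update idea).
            if k == 2 and t % 5 != 0:
                continue
            x = rng.normal(size=dim)
            y = 1.0 if x[0] > 0 else -1.0
            node.local_update(x, y)
        if (t + 1) % 10 == 0:          # periodic connectivity: sync every 10 rounds
            synchronize(nodes, uniform)
```

In this sketch the "projection-free" property comes from penalizing the constraint via the dual variable instead of projecting after every step, and the deferred updates simply skip rounds for the constrained node; the paper's actual mechanism for both may differ.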