Title
Learning Mean-Field Control for Delayed Information Load Balancing in Large Queuing Systems
Authors
Abstract
Recent years have seen a great increase in the capacity and parallel processing power of data centers and cloud services. To fully utilize these distributed systems, optimal load balancing for parallel queuing architectures must be realized. Existing state-of-the-art solutions fail to consider the effect of communication delays on the behaviour of very large systems with many clients. In this work, we consider a multi-agent load balancing system, with delayed information, consisting of many clients (load balancers) and many parallel queues. In order to obtain a tractable solution, we model this system as a mean-field control problem with enlarged state-action space in discrete time through exact discretization. Subsequently, we apply policy gradient reinforcement learning algorithms to find an optimal load balancing solution. Here, the discrete-time system model incorporates a synchronization delay under which the queue state information is synchronously broadcast and updated at all clients. We then provide theoretical performance guarantees for our methodology in large systems. Finally, using experiments, we demonstrate that our approach is not only scalable but also shows good performance when compared to the state-of-the-art power-of-d variant of the Join-the-Shortest-Queue (JSQ) policy and other policies in the presence of synchronization delays.
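For context, the power-of-d JSQ baseline mentioned above can be sketched as follows. This is a minimal illustrative implementation, not the authors' method: each client samples d queues uniformly at random and dispatches the job to the shortest sampled queue, using whatever (possibly stale, synchronously broadcast) queue-length snapshot it last received. The function name and signature are hypothetical.

```python
import random

def power_of_d_choice(queue_snapshot, d=2, rng=random):
    """Power-of-d Join-the-Shortest-Queue dispatch rule.

    queue_snapshot: list of queue lengths as last broadcast to the
        client; under a synchronization delay this snapshot may be
        stale relative to the true current queue lengths.
    d: number of queues to sample uniformly at random.
    Returns the index of the shortest queue among the d sampled ones.
    """
    sampled = rng.sample(range(len(queue_snapshot)), d)
    return min(sampled, key=lambda i: queue_snapshot[i])
```

With d equal to the total number of queues this reduces to plain JSQ; with small d (e.g. d=2) each dispatch decision needs only d queue-length lookups, which is what makes the policy scalable to many parallel queues.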