Paper Title
Distributed Value Function Approximation for Collaborative Multi-Agent Reinforcement Learning
Paper Authors
Paper Abstract
In this paper we propose several novel distributed gradient-based temporal difference algorithms for multi-agent off-policy learning of a linear approximation of the value function in Markov decision processes with strict information structure constraints, which limit inter-agent communication to small neighborhoods. The algorithms are composed of: 1) local parameter updates based on single-agent off-policy gradient temporal difference learning algorithms, including eligibility traces with state-dependent parameters, and 2) linear stochastic time-varying consensus schemes, represented by directed graphs. The proposed algorithms differ in their form, the definition of the eligibility traces, the selection of time scales, and the way consensus iterations are incorporated. The main contribution of the paper is a convergence analysis based on the general properties of the underlying Feller-Markov processes and the stochastic time-varying consensus model. We prove, under general assumptions, that the parameter estimates generated by all the proposed algorithms weakly converge to the precisely defined invariant sets of the corresponding ordinary differential equations (ODEs). It is also demonstrated how the adopted methodology can be applied to temporal-difference algorithms under weaker information structure constraints. The variance reduction effect of the proposed algorithms is demonstrated by formulating and analyzing an asymptotic stochastic differential equation. Specific guidelines for communication network design are provided. The algorithms' superior properties are illustrated through characteristic simulation results.
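To make the two-step structure described in the abstract concrete, the following is a minimal sketch, not the paper's exact algorithm: each agent performs a single-agent off-policy gradient TD update with an importance-weighted eligibility trace on its own transition, and the agents then mix their parameter vectors with a linear consensus step over the communication graph. All names and constants (theta, w, e, the mixing matrix A, rho, gamma, lam, alpha, beta) are illustrative assumptions.

```python
import numpy as np

def local_gtd_step(theta, w, e, phi, phi_next, reward, rho, gamma, lam, alpha, beta):
    """One local off-policy GTD-style update with an eligibility trace (illustrative)."""
    e = rho * (gamma * lam * e + phi)                          # importance-weighted trace
    delta = reward + gamma * phi_next @ theta - phi @ theta    # TD error
    theta = theta + alpha * (delta * e - gamma * (1 - lam) * (e @ w) * phi_next)
    w = w + beta * (delta * e - (phi @ w) * phi)               # auxiliary weights (faster time scale)
    return theta, w, e

def consensus_step(thetas, A):
    """Linear consensus: each agent averages neighbors' parameters with stochastic weights."""
    return A @ thetas   # thetas: (n_agents, dim); A: (n_agents, n_agents), rows sum to 1

# Toy usage with random features and a fixed mixing matrix (placeholder for a
# time-varying, directed-graph consensus scheme).
rng = np.random.default_rng(0)
n_agents, dim = 3, 4
thetas = np.zeros((n_agents, dim))
ws = np.zeros((n_agents, dim))
es = np.zeros((n_agents, dim))
A = np.full((n_agents, n_agents), 1.0 / n_agents)

for t in range(100):
    for i in range(n_agents):
        phi, phi_next = rng.normal(size=dim), rng.normal(size=dim)
        thetas[i], ws[i], es[i] = local_gtd_step(
            thetas[i], ws[i], es[i], phi, phi_next,
            reward=rng.normal(), rho=1.0, gamma=0.95, lam=0.7, alpha=0.01, beta=0.05)
    thetas = consensus_step(thetas, A)
```

In the paper the mixing matrix is stochastic and time-varying and the communication graph is directed; the fixed averaging matrix above is only a stand-in to show where the consensus iteration enters relative to the local updates.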