Paper Title
A policy gradient approach for Finite Horizon Constrained Markov Decision Processes
Paper Authors
Paper Abstract
The infinite horizon setting is widely adopted for problems of reinforcement learning (RL) and invariably results in optimal policies that are stationary. In many situations, however, finite horizon control problems are of interest, and for such problems the optimal policies are in general time-varying. Another setting that has become popular in recent times is that of Constrained Reinforcement Learning, where the agent maximizes its rewards while also aiming to satisfy certain given constraint criteria. However, this setting has so far been studied only in the context of infinite horizon MDPs, where stationary policies are optimal. We present an algorithm for constrained RL in the finite horizon setting, where the horizon terminates after a fixed (finite) time. Our algorithm uses function approximation, which is essential when the state and action spaces are large or continuous, and employs the policy gradient method to find the optimal policy. The optimal policy that we obtain depends on the stage and is therefore non-stationary in general. To the best of our knowledge, our paper presents the first policy gradient algorithm for the constrained finite horizon setting. We show the convergence of our algorithm to a constrained optimal policy. We also compare and analyze the performance of our algorithm through experiments and show that it performs better than some other well-known algorithms.
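To make the setting concrete, the following is a minimal sketch, not the authors' exact algorithm, of a Lagrangian-based policy gradient update for a finite-horizon constrained MDP. It assumes a small tabular problem with horizon H, a separate softmax policy parameter matrix per stage (so the learned policy is stage-dependent), a single constraint on the expected total cost, and REINFORCE-style gradient estimates with dual ascent on the multiplier; all names (H, cost_budget, alpha_theta, etc.) are illustrative and not taken from the paper.

```python
# Hypothetical illustration of finite-horizon constrained policy gradient
# with a Lagrangian relaxation; not the paper's algorithm or notation.
import numpy as np

rng = np.random.default_rng(0)

H, n_states, n_actions = 5, 4, 3          # horizon, |S|, |A| (toy sizes)
cost_budget = 2.0                         # constraint: E[total cost] <= cost_budget
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # rewards
C = rng.uniform(0, 1, size=(n_states, n_actions))                 # costs

theta = np.zeros((H, n_states, n_actions))   # one policy per stage -> non-stationary
lam = 0.0                                    # Lagrange multiplier for the cost constraint
alpha_theta, alpha_lam = 0.05, 0.01          # primal / dual step sizes

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for episode in range(5000):
    s = rng.integers(n_states)
    traj, total_r, total_c = [], 0.0, 0.0
    # Roll out one episode of length H under the stage-dependent policy.
    for h in range(H):
        probs = softmax(theta[h, s])
        a = rng.choice(n_actions, p=probs)
        traj.append((h, s, a))
        total_r += R[s, a]
        total_c += C[s, a]
        s = rng.choice(n_states, p=P[s, a])
    # REINFORCE update on the Lagrangian return: reward - lam * (cost - budget).
    lagrangian_return = total_r - lam * (total_c - cost_budget)
    for h, s_h, a_h in traj:
        probs = softmax(theta[h, s_h])
        grad_log = -probs
        grad_log[a_h] += 1.0                 # gradient of log pi_h(a_h | s_h) wrt theta[h, s_h]
        theta[h, s_h] += alpha_theta * lagrangian_return * grad_log
    # Dual ascent on the multiplier, projected to stay non-negative.
    lam = max(0.0, lam + alpha_lam * (total_c - cost_budget))

print("final multiplier:", lam)
```

In this sketch the key point matching the abstract is that `theta` carries a separate parameter block for every stage h, so the resulting policy is non-stationary; the constraint is handled here by a primal-dual scheme purely for illustration.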