Paper Title
Learning with Safety Constraints: Sample Complexity of Reinforcement Learning for Constrained MDPs
Paper Authors
Paper Abstract
Many physical systems have underlying safety considerations that require that the policy employed ensures the satisfaction of a set of constraints. The analytical formulation usually takes the form of a Constrained Markov Decision Process (CMDP). We focus on the case where the CMDP is unknown, and RL algorithms obtain samples to discover the model and compute an optimal constrained policy. Our goal is to characterize the relationship between safety constraints and the number of samples needed to ensure a desired level of accuracy -- both objective maximization and constraint satisfaction -- in a PAC sense. We explore two classes of RL algorithms, namely, (i) a generative model based approach, wherein samples are taken initially to estimate a model, and (ii) an online approach, wherein the model is updated as samples are obtained. Our main finding is that, compared to the best-known bounds for the unconstrained regime, the sample complexity of constrained RL algorithms is increased by a factor that is logarithmic in the number of constraints, which suggests that the approach may be easily utilized in real systems.
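For reference, the CMDP described in the abstract is commonly written as the following constrained optimization over policies; the specific notation (reward r, cost functions c_i, thresholds, discount factor gamma, and number of constraints N) is assumed here for illustration and is not taken from the abstract itself:

\[
\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, r(s_t, a_t)\right]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, c_i(s_t, a_t)\right] \le \bar{c}_i,
\qquad i = 1, \dots, N.
\]

In this setting, an (epsilon, delta)-PAC guarantee typically means that, with probability at least 1 - delta, the returned policy attains a value within epsilon of the optimal constrained value while violating each constraint by at most epsilon.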