Paper Title

When to Update Your Model: Constrained Model-based Reinforcement Learning

Paper Authors

Tianying Ji, Yu Luo, Fuchun Sun, Mingxuan Jing, Fengxiang He, Wenbing Huang

Paper Abstract

Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impacts of model shifts, and their corresponding algorithms are prone to degrade performance by drastic model updating. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee of MBRL. Our follow-up derived bounds reveal the relationship between model shifts and performance improvement. These discoveries encourage us to formulate a constrained lower-bound optimization problem to permit the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm, CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and produces a boost when various policy optimization methods are employed.
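To make the abstract's event-triggered idea more concrete, the following is a minimal Python sketch of a model-based RL training loop that refits the dynamics model only when an estimated data-distribution shift crosses a threshold, rather than on a fixed schedule. The DynamicsModel and Policy placeholders, the shift_statistic proxy, and the threshold value are illustrative assumptions; they are not the paper's CMLO implementation, its constrained lower-bound objective, or its specific trigger condition.

import numpy as np

# Illustrative sketch only: an event-triggered model-update loop.
# All class names, the shift proxy, and the threshold are assumptions,
# not the authors' CMLO algorithm.

class DynamicsModel:
    """Placeholder for a learned dynamics model."""
    def fit(self, transitions):
        pass  # e.g., maximum-likelihood training on collected transitions

class Policy:
    """Placeholder for a policy optimized against the learned model."""
    def improve(self, model):
        pass  # e.g., SAC/PPO updates using model-generated rollouts

    def act(self, state):
        return np.zeros(1)  # dummy action

def shift_statistic(buffer, window):
    """Crude proxy for data/model shift: drift of the recent state mean."""
    if len(buffer) < 2 * window:
        return 0.0
    states = np.array([s for (s, _, _, _) in buffer])
    recent = states[-window:].mean(axis=0)
    older = states[-2 * window:-window].mean(axis=0)
    return float(np.linalg.norm(recent - older))

def train(env, steps=10_000, window=250, threshold=0.1):
    model, policy, buffer = DynamicsModel(), Policy(), []
    state = env.reset()
    for _ in range(steps):
        action = policy.act(state)
        next_state, reward, done, _ = env.step(action)
        buffer.append((state, action, reward, next_state))
        state = env.reset() if done else next_state

        # Event-triggered update: refit the model only when the estimated
        # shift exceeds a tolerance, instead of every fixed number of steps.
        if shift_statistic(buffer, window) > threshold:
            model.fit(buffer)

        policy.improve(model)
    return policy

The design point mirrored here is that the decision of when to update the model is made by a monitored condition on the collected data, which is what allows the update frequency to vary with how quickly the explored distribution changes.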
