Paper Title

Learning Off-Policy with Online Planning

Paper Authors

Harshit Sikchi, Wenxuan Zhou, David Held

Paper Abstract

Reinforcement learning (RL) in low-data and risk-sensitive domains requires performant and flexible deployment policies that can readily incorporate constraints during deployment. One such class of policies are the semi-parametric H-step lookahead policies, which select actions using trajectory optimization over a dynamics model for a fixed horizon with a terminal value function. In this work, we investigate a novel instantiation of H-step lookahead with a learned model and a terminal value function learned by a model-free off-policy algorithm, named Learning Off-Policy with Online Planning (LOOP). We provide a theoretical analysis of this method, suggesting a tradeoff between model errors and value function errors and empirically demonstrate this tradeoff to be beneficial in deep reinforcement learning. Furthermore, we identify the "Actor Divergence" issue in this framework and propose Actor Regularized Control (ARC), a modified trajectory optimization procedure. We evaluate our method on a set of robotic tasks for Offline and Online RL and demonstrate improved performance. We also show the flexibility of LOOP to incorporate safety constraints during deployment with a set of navigation environments. We demonstrate that LOOP is a desirable framework for robotics applications based on its strong performance in various important RL settings. Project video and details can be found at https://hari-sikchi.github.io/loop.
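As context for the abstract, the H-step lookahead objective it refers to is commonly written as below, where \(\hat{f}\) is the learned dynamics model, \(r\) the reward, \(\gamma\) the discount factor, and \(\hat{V}\) the terminal value function learned by the model-free off-policy algorithm. This is the standard formulation of such policies; the paper's exact notation may differ.

```latex
a^{*}_{0:H-1} = \arg\max_{a_{0:H-1}} \;
\mathbb{E}\!\left[ \sum_{t=0}^{H-1} \gamma^{t}\, r(s_t, a_t)
\;+\; \gamma^{H}\, \hat{V}(s_H) \right],
\qquad s_{t+1} = \hat{f}(s_t, a_t).
```

Only the first action of the optimized sequence is executed, and planning is repeated at the next state (model-predictive control). Below is a minimal sketch of this kind of H-step lookahead action selection using simple random-shooting trajectory optimization rather than the paper's ARC procedure; `dynamics_model`, `reward_fn`, and `value_fn` are hypothetical placeholders for the learned components.

```python
import numpy as np

def h_step_lookahead_action(state, dynamics_model, reward_fn, value_fn,
                            action_dim, horizon=10, num_candidates=256,
                            gamma=0.99, action_low=-1.0, action_high=1.0,
                            rng=None):
    """Pick an action by optimizing H-step returns under a learned model,
    closing the horizon with a terminal value function estimate."""
    rng = np.random.default_rng() if rng is None else rng
    # Sample candidate action sequences (a CEM- or ARC-style optimizer would
    # iteratively refine this sampling distribution instead of sampling once).
    actions = rng.uniform(action_low, action_high,
                          size=(num_candidates, horizon, action_dim))
    states = np.repeat(np.asarray(state)[None, :], num_candidates, axis=0)
    returns = np.zeros(num_candidates)
    for t in range(horizon):
        returns += (gamma ** t) * reward_fn(states, actions[:, t])
        states = dynamics_model(states, actions[:, t])   # learned dynamics rollout
    returns += (gamma ** horizon) * value_fn(states)      # terminal value bootstrap
    best = int(np.argmax(returns))
    return actions[best, 0]  # execute only the first action (MPC style)
```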
