Paper Title
CUP: Critic-Guided Policy Reuse
Paper Authors
Paper Abstract
The ability to reuse previous policies is an important aspect of human intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning (DRL) agent needs to decide when to reuse and which source policies to reuse. Previous methods solve this problem by introducing extra components to the underlying algorithm, such as hierarchical high-level policies over source policies, or estimations of source policies' value functions on the target task. However, training these components induces either optimization non-stationarity or heavy sampling cost, significantly impairing the effectiveness of transfer. To tackle this problem, we propose a novel policy reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP utilizes the critic, a common component in actor-critic methods, to evaluate and choose source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, and forms a guidance policy. The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy. Then the target policy is regularized to imitate the guidance policy to perform efficient policy search. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms.
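As a rough formalization of the selection rule sketched in the abstract (the notation here is ours and is only a plausible reading, not necessarily the paper's exact definition): let \(\pi\) denote the current target policy, \(Q^{\pi}\) its critic, and \(\pi_1, \dots, \pi_n\) the source policies. At each state \(s\), the guidance policy picks whichever candidate the critic rates highest:

% Sketch of the critic-guided selection rule described in the abstract (assumed notation).
\[
\pi_g(\cdot \mid s) = \pi_{k^*(s)}(\cdot \mid s),
\qquad
k^*(s) = \operatorname*{arg\,max}_{k \in \{0, 1, \dots, n\}} \;
\mathbb{E}_{a \sim \pi_k(\cdot \mid s)} \big[ Q^{\pi}(s, a) \big],
\]

where \(\pi_0 := \pi\) is the target policy itself. Because the target policy is always among the candidates, \(\pi_g\) can only match or exceed \(\pi\)'s expected one-step value at every state, which is consistent with the monotonic-improvement claim in the abstract.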