Paper Title

Cooperative Heterogeneous Deep Reinforcement Learning

Paper Authors

Han Zheng, Pengfei Wei, Jing Jiang, Guodong Long, Qinghua Lu, Chengqi Zhang

Paper Abstract

Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws. In this work, we present a Cooperative Heterogeneous Deep Reinforcement Learning (CHDRL) framework that can learn a policy by integrating the advantages of heterogeneous agents. Specifically, we propose a cooperative learning framework that classifies heterogeneous agents into two classes: global agents and local agents. Global agents are off-policy agents that can utilize experiences from the other agents. Local agents are either on-policy agents or population-based evolutionary algorithm (EA) agents that can explore the local area effectively. We employ global agents, which are sample-efficient, to guide the learning of local agents so that local agents can benefit from sample-efficient agents and simultaneously maintain their advantages, e.g., stability. Global agents also benefit from effective local searches. Experimental studies on a range of continuous control tasks from the MuJoCo benchmark show that CHDRL achieves better performance compared with state-of-the-art baselines.
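
The abstract only describes the cooperation between global and local agents at a high level. The minimal Python sketch below is one plausible reading of that loop: local agents explore and feed a shared replay buffer, the off-policy global agent trains on the pooled experience, and the global agent periodically guides the local agents. All names here (ReplayBuffer, GlobalAgent, LocalAgent, absorb_guidance, toy_env_step) and the toy environment are illustrative assumptions, not the authors' implementation.

import random
from collections import deque

# Hypothetical interfaces -- names and update rules are illustrative, not from the paper.

class ReplayBuffer:
    """Shared experience pool that the off-policy global agent learns from."""
    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def add(self, transition):
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))


class GlobalAgent:
    """Stand-in for a sample-efficient off-policy learner; details are assumptions."""
    def act(self, state):
        return random.uniform(-1.0, 1.0)   # placeholder policy

    def update(self, batch):
        pass                                # gradient step on replayed transitions


class LocalAgent:
    """Stand-in for an on-policy or evolutionary learner that explores locally."""
    def act(self, state):
        return random.uniform(-1.0, 1.0)

    def update(self, rollout_data):
        pass                                # on-policy / evolutionary update

    def absorb_guidance(self, global_agent):
        pass                                # e.g. imitate or copy improved behaviour


def rollout(agent, env_step, horizon=200):
    """Collect one trajectory; env_step maps (state, action) to (next_state, reward, done)."""
    state, traj = 0.0, []
    for _ in range(horizon):
        action = agent.act(state)
        next_state, reward, done = env_step(state, action)
        traj.append((state, action, reward, next_state, done))
        state = next_state
        if done:
            break
    return traj


def toy_env_step(state, action):
    """Trivial 1-D environment used only so the sketch runs end to end."""
    next_state = state + 0.1 * action
    reward = -abs(next_state)
    return next_state, reward, abs(next_state) > 5.0


if __name__ == "__main__":
    buffer, global_agent = ReplayBuffer(), GlobalAgent()
    local_agents = [LocalAgent(), LocalAgent()]

    for iteration in range(10):
        # Local agents explore; their experience also feeds the shared buffer,
        # so the sample-efficient off-policy global agent can reuse it.
        for local in local_agents:
            traj = rollout(local, toy_env_step)
            local.update(traj)
            for transition in traj:
                buffer.add(transition)

        # The global agent may also interact; its transitions go to the same buffer.
        for transition in rollout(global_agent, toy_env_step):
            buffer.add(transition)

        # Global agent trains on pooled experience, then guides the local agents.
        global_agent.update(buffer.sample(batch_size=64))
        for local in local_agents:
            local.absorb_guidance(global_agent)

In a real instantiation, GlobalAgent would be an off-policy method such as SAC or TD3, LocalAgent would be an on-policy method such as PPO or an EA population, and absorb_guidance could be behaviour cloning toward, or soft copying of, the better-performing global policy; the paper's exact guidance mechanism is not specified in the abstract.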
