Paper Title
Dealing with Sparse Rewards in Continuous Control Robotics via Heavy-Tailed Policies
Paper Authors
Paper Abstract
In this paper, we present a novel Heavy-Tailed Stochastic Policy Gradient (HT-SPG) algorithm to deal with the challenges of sparse rewards in continuous control problems. Sparse rewards are common in continuous control robotics tasks such as manipulation and navigation, and they make the learning problem hard because estimating value functions over the state space becomes non-trivial. This typically demands either reward shaping or expert demonstrations for the sparse-reward environment; however, obtaining high-quality demonstrations is quite expensive and sometimes even impossible. We propose a heavy-tailed policy parametrization along with a modified momentum-based policy gradient tracking scheme (HT-SPG) to induce stable exploratory behavior in the algorithm. The proposed algorithm does not require access to expert demonstrations. We test the performance of HT-SPG on various benchmark continuous control tasks with sparse rewards, such as 1D Mario, Pathological Mountain Car, Sparse Pendulum in OpenAI Gym, and a sparse MuJoCo environment (Hopper-v2). We show consistent performance improvement across all tasks in terms of high average cumulative reward. HT-SPG also converges faster while using fewer samples, underscoring the sample efficiency of the proposed algorithm.
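To make the two ingredients named in the abstract more concrete, the sketch below illustrates what a heavy-tailed policy parametrization and a momentum-tracked policy gradient estimate could look like in PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of a Cauchy action distribution, the network sizes, the momentum constant `beta`, and the function names are all hypothetical placeholders for exposition.

```python
# Minimal sketch (not the paper's code) of a heavy-tailed policy and a
# momentum-tracked policy-gradient estimate. Distribution choice (Cauchy),
# layer sizes, and beta are illustrative assumptions.
import torch
import torch.nn as nn


class HeavyTailedPolicy(nn.Module):
    """Maps a state to the location/scale of a heavy-tailed action distribution."""

    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.loc = nn.Linear(hidden, action_dim)
        self.log_scale = nn.Parameter(torch.zeros(action_dim))

    def dist(self, state):
        h = self.body(state)
        # Cauchy tails are much heavier than Gaussian tails, so the policy
        # occasionally samples large actions and keeps exploring even when
        # rewards are sparse.
        return torch.distributions.Cauchy(self.loc(h), self.log_scale.exp())


def momentum_policy_gradient(policy, grad_prev, states, actions, returns, beta=0.9):
    """One REINFORCE-style gradient evaluation with a simple momentum-tracked estimate."""
    logp = policy.dist(states).log_prob(actions).sum(-1)
    loss = -(logp * returns).mean()  # surrogate loss for the policy gradient
    grads = torch.autograd.grad(loss, list(policy.parameters()))
    if grad_prev is None:
        return [g.detach() for g in grads]
    # Exponential moving average of stochastic gradients stabilizes the update
    # direction across noisy, sparse-reward batches.
    return [beta * gp + (1 - beta) * g.detach() for gp, g in zip(grad_prev, grads)]
```

A caller would, for example, keep the returned list as `grad_prev` for the next batch and apply it with a plain SGD step (`param.data -= lr * g`); the variance-reduced tracking scheme described in the abstract is more involved than this simple moving average.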