Paper Title
A Closer Look at Invalid Action Masking in Policy Gradient Algorithms
Paper Authors
Paper Abstract
In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games. Because these games have complicated rules, an action sampled from the full discrete action distribution predicted by the learned policy is likely to be invalid according to the game rules (e.g., walking into a wall). The usual approach to deal with this problem in policy gradient algorithms is to "mask out" invalid actions and just sample from the set of valid actions. The implications of this process, however, remain under-investigated. In this paper, we 1) show theoretical justification for such a practice, 2) empirically demonstrate its importance as the space of invalid actions grows, and 3) provide further insights by evaluating different action masking regimes, such as removing masking after an agent has been trained using masking. The source code can be found at https://github.com/vwxyzjn/invalid-action-masking
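For illustration only (this is not part of the paper's abstract), the masking approach described above is commonly implemented by replacing the logits of invalid actions with a large negative value before constructing the categorical distribution, so that invalid actions receive (numerically) zero probability and contribute no gradient. The following minimal PyTorch sketch assumes a flat discrete action space; the function name and mask tensor are hypothetical.

```python
import torch
from torch.distributions import Categorical

def masked_categorical(logits: torch.Tensor, invalid_mask: torch.Tensor) -> Categorical:
    """Build an action distribution in which invalid actions are masked out.

    `invalid_mask` is a boolean tensor, True where an action is invalid.
    Replacing the corresponding logits with a large negative number drives
    their probability to ~0 under the softmax.
    """
    masked_logits = torch.where(
        invalid_mask,
        torch.full_like(logits, -1e8),  # large negative logit -> ~0 probability
        logits,
    )
    return Categorical(logits=masked_logits)

# Example: 4 discrete actions; actions 1 and 3 are invalid in the current state.
logits = torch.tensor([0.2, 1.5, -0.3, 0.7], requires_grad=True)
invalid = torch.tensor([False, True, False, True])

dist = masked_categorical(logits, invalid)
action = dist.sample()             # only actions 0 or 2 can be drawn
log_prob = dist.log_prob(action)   # used as usual in the policy gradient loss
```

In this sketch, sampling and the log-probability used in the policy gradient loss both come from the masked distribution, which corresponds to the "sample only from valid actions" practice the abstract refers to.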