Paper Title

Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition

Authors

Zhentao Tang, Yuanheng Zhu, Dongbin Zhao, Simon M. Lucas

Abstract

The Fighting Game AI Competition (FTGAIC) provides a challenging benchmark for 2-player video game AI. The challenge arises from the large action space, the diverse styles of characters and abilities, and the real-time nature of the game. In this paper, we propose a novel algorithm that combines the Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning. The approach is readily applicable to any 2-player video game. In contrast to conventional RHEA, an opponent model is proposed and optimized, based on historical observations of the opponent, by supervised learning with cross-entropy and by reinforcement learning with policy gradient and Q-learning, respectively. The model is learned during live gameplay. With the learned opponent model, the extended RHEA is able to make more realistic plans based on what the opponent is likely to do, which tends to lead to better results. We compared our approach directly with the bots from the FTGAIC 2018 competition and found that our method significantly outperforms all of them, for all three characters. Furthermore, our proposed bot with the policy-gradient-based opponent model is the only one among the top five bots in the 2019 competition that does not use Monte-Carlo Tree Search (MCTS); it achieved second place while using much less domain knowledge than the winner.
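
The abstract describes the core loop: RHEA evolves short action sequences against a forward model, while a model of the opponent, learned online from observed moves, supplies the opponent's actions during each rollout. The Python sketch below is only an illustration of that idea under assumed names (`rhea_step`, `FrequencyOpponentModel`, the toy forward model and constants are all placeholders), not the paper's implementation; the paper instead learns a neural opponent policy via cross-entropy, policy gradient, or Q-learning.

```python
import random

# Minimal sketch (not the paper's code) of Rolling Horizon Evolution with a
# learned opponent model. State, forward model, and scoring are toy stand-ins.

NUM_ACTIONS = 10     # size of the discrete action space (assumed)
HORIZON = 8          # planning horizon: actions per evolved sequence
POP_SIZE = 20
GENERATIONS = 5
MUTATION_RATE = 0.2

class FrequencyOpponentModel:
    """Toy opponent model: predicts the opponent's most frequent observed action.
    The paper trains a policy online (cross-entropy / policy gradient / Q-learning);
    this simple counter is only a placeholder for that learned model."""
    def __init__(self):
        self.counts = [1] * NUM_ACTIONS
    def observe(self, opp_action):
        self.counts[opp_action] += 1
    def predict(self):
        return max(range(NUM_ACTIONS), key=lambda a: self.counts[a])

def evaluate(state, plan, forward_model, opp_model):
    """Roll the plan forward, letting the opponent act via the learned model."""
    for my_action in plan:
        state = forward_model(state, my_action, opp_model.predict())
    return state  # here the state is just a scalar score (e.g. HP difference)

def rhea_step(state, forward_model, opp_model):
    """Evolve action sequences for one frame; return the first action of the best."""
    pop = [[random.randrange(NUM_ACTIONS) for _ in range(HORIZON)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda p: evaluate(state, p, forward_model, opp_model), reverse=True)
        elites = pop[:POP_SIZE // 2]
        children = []
        while len(elites) + len(children) < POP_SIZE:
            a, b = random.sample(elites, 2)
            cut = random.randrange(1, HORIZON)                  # one-point crossover
            children.append([g if random.random() > MUTATION_RATE
                             else random.randrange(NUM_ACTIONS)  # mutation
                             for g in a[:cut] + b[cut:]])
        pop = elites + children
    best = max(pop, key=lambda p: evaluate(state, p, forward_model, opp_model))
    return best[0]  # execute only the first action, then re-plan (rolling horizon)

# Toy usage: the score grows when our action "counters" the predicted opponent move.
toy_forward = lambda s, mine, opp: s + (1 if mine == (opp + 1) % NUM_ACTIONS else -0.1)
model = FrequencyOpponentModel()
for observed_opp_action in [3, 3, 5, 3]:   # pretend we watched the opponent act
    model.observe(observed_opp_action)
print(rhea_step(0.0, toy_forward, model))  # best immediate action under the model
```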
