Paper Title

Adaptive Learning with Artificial Barriers Yielding Nash Equilibria in General Games

Authors

Ismail Hassan, B. John Oommen, Anis Yazidi

Abstract

Artificial barriers in Learning Automata (LA) are a powerful and yet under-explored concept, although they were first proposed in the 1980s. Introducing artificial non-absorbing barriers makes LA schemes resilient to being trapped in absorbing barriers, a phenomenon often referred to as lock-in probability, which leads to the exclusive choice of a single action after convergence. Within the field of LA, and reinforcement learning in general, there is a scarcity of theoretical work on, and applications of, schemes with artificial barriers. In this paper, we devise an LA with artificial barriers for solving a general form of stochastic bimatrix game. Classical LA systems possess absorbing-barrier properties; they are a powerful tool in game theory and have been shown to converge to the game's Nash equilibrium under limited information. However, the stream of work on LA for solving game-theoretical problems can only handle the case where the Saddle Point of the game exists in pure strategies, and fails to reach a mixed Nash equilibrium when no Saddle Point exists in pure strategies. In this paper, by resorting to the powerful concept of artificial barriers, we propose an LA that converges to an optimal mixed Nash equilibrium even though there may be no Saddle Point in pure strategies. Our deployed scheme is of the Linear Reward-Inaction ($L_{R-I}$) flavor, which is originally an absorbing LA scheme; however, we render it non-absorbing by introducing artificial barriers in an elegant and natural manner, in the sense that the well-known legacy $L_{R-I}$ scheme can be seen as an instance of our proposed algorithm for a particular choice of the barrier. Furthermore, we present an $S$-Learning version of our LA with absorbing barriers that is able to handle $S$-Learning environments, in which the feedback is continuous and not binary as in the case of the $L_{R-I}$.
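
The barrier mechanism sketched in the abstract lends itself to a compact illustration. The following is a minimal Python sketch under stated assumptions, not the paper's actual algorithm or notation: on reward, the played action's probability is moved toward a barrier `p_max` < 1 rather than toward 1, the remaining actions share the floor `(1 - p_max)/(r - 1)`, and setting `p_max = 1` recovers the legacy $L_{R-I}$ update. The names `lri_barrier_update`, `theta`, and `p_max` are illustrative choices, and the $S$-Learning analogue shown (which scales the same step by a continuous feedback $s \in [0, 1]$ instead of gating on a binary reward) is likewise an assumption about the scheme's form.

```python
import numpy as np

def lri_barrier_update(p, i, reward, theta=0.05, p_max=0.95):
    """One L_{R-I}-style step with an artificial barrier (illustrative).

    p       current action-probability vector (entries sum to 1)
    i       index of the action just played
    reward  True if the environment rewarded the action
    theta   learning rate
    p_max   artificial barrier; p_max = 1 recovers legacy L_{R-I}
    """
    p = np.asarray(p, dtype=float).copy()
    if reward:                            # Reward-Inaction: no update on penalty
        r = len(p)
        floor = (1.0 - p_max) / (r - 1)   # mass retained by the other actions
        for j in range(r):
            target = p_max if j == i else floor
            p[j] += theta * (target - p[j])   # step toward barrier, not corner
    return p

def s_lri_barrier_update(p, i, s, theta=0.05, p_max=0.95):
    """Assumed S-Learning analogue: continuous feedback s in [0, 1]
    scales the step size instead of acting as a binary gate."""
    p = np.asarray(p, dtype=float).copy()
    r = len(p)
    floor = (1.0 - p_max) / (r - 1)
    for j in range(r):
        target = p_max if j == i else floor
        p[j] += theta * s * (target - p[j])
    return p

# Toy usage: the probabilities stay inside [1 - p_max, p_max] and
# therefore never absorb at a pure strategy.
rng = np.random.default_rng(0)
p = np.array([0.5, 0.5])
for _ in range(2000):
    i = rng.choice(2, p=p)
    p = lri_barrier_update(p, i, rng.random() < (0.8 if i == 0 else 0.4))
print(p)
```

Since each update moves the vector toward a target that itself sums to 1, the probability vector remains a valid mixed strategy at every step, which is what allows the automaton to settle at an interior mixed strategy rather than being absorbed at a corner of the simplex.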
