Paper Title


Multi-Agent Learning of Numerical Methods for Hyperbolic PDEs with Factored Dec-MDP

Authors

Yiwei Fu, Dheeraj S. K. Kapilavai, Elliot Way

Abstract


Factored decentralized Markov decision process (Dec-MDP) is a framework for modeling sequential decision making problems in multi-agent systems. In this paper, we formalize the learning of numerical methods for hyperbolic partial differential equations (PDEs), specifically the Weighted Essentially Non-Oscillatory (WENO) scheme, as a factored Dec-MDP problem. We show that different reward formulations lead to either reinforcement learning (RL) or behavior cloning, and a homogeneous policy could be learned for all agents under the RL formulation with a policy gradient algorithm. Because the trained agents only act on their local observations, the multi-agent system can be used as a general numerical method for hyperbolic PDEs and generalize to different spatial discretizations, episode lengths, dimensions, and even equation types.
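To make the local-observation setup concrete, the sketch below is a hypothetical illustration (not the authors' released code): one shared policy is applied at every interface of a periodic 1D grid. Each "agent" observes only a five-cell stencil and outputs convex weights over the three standard WENO-5 sub-stencil reconstructions. The policy function here is a placeholder for the trained network; the sub-stencil coefficients are the standard upwind-biased ones from the WENO literature.

import numpy as np

def candidate_reconstructions(stencil):
    # Third-order reconstructions of u at interface x_{i+1/2} from the
    # three WENO-5 sub-stencils (standard coefficients).
    u0, u1, u2, u3, u4 = stencil
    f0 = (2*u0 - 7*u1 + 11*u2) / 6.0
    f1 = (-u1 + 5*u2 + 2*u3) / 6.0
    f2 = (2*u2 + 5*u3 - u4) / 6.0
    return np.array([f0, f1, f2])

def policy(stencil):
    # Placeholder for the learned homogeneous policy: maps a local
    # observation to convex combination weights via a softmax.
    logits = np.zeros(3)  # untrained stand-in for a neural network
    w = np.exp(logits)
    return w / w.sum()

def reconstruct_interfaces(u):
    # Apply the same agent at every interface of a periodic grid.
    n = len(u)
    recon = np.empty(n)
    for i in range(n):
        stencil = u[np.arange(i - 2, i + 3) % n]  # local observation only
        w = policy(stencil)                        # agent's action
        recon[i] = w @ candidate_reconstructions(stencil)
    return recon

# Usage example: the same policy works unchanged on any grid size, e.g.
# reconstruct_interfaces(np.sin(np.linspace(0, 2*np.pi, 64, endpoint=False)))

Since nothing in the loop references a global grid size or a particular equation, the same trained policy can be reused across different spatial discretizations, which is the sense in which the abstract claims generalization.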
