Paper Title
RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments
Paper Authors
Paper Abstract
We propose RAPid-Learn (Learning to Recover and Plan Again), a hybrid planning and learning method, to tackle the problem of adapting to sudden and unexpected changes in an agent's environment (i.e., novelties). RAPid-Learn is designed to formulate and solve modifications to a task's Markov Decision Process (MDP) on the fly, and is capable of exploiting domain knowledge to learn any new dynamics caused by the environmental changes. This learned knowledge takes the form of action executors, which can be used to resolve execution impasses and lead to successful plan execution; the novelty information is then reflected in the agent's updated domain model. We demonstrate its efficacy by introducing a wide variety of novelties in a gridworld environment inspired by Minecraft, and compare our algorithm with transfer-learning baselines from the literature. Our method is (1) effective even in the presence of multiple novelties, (2) more sample efficient than transfer-learning RL baselines, and (3) robust to incomplete model information, unlike pure symbolic planning approaches.
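The recovery loop the abstract describes (plan, hit an execution impasse caused by a novelty, learn a new action executor, update the domain model, replan) can be sketched in miniature. This is a hypothetical illustration, not RAPid-Learn's actual API: the function names (`plan`, `learn_executor`, `run`), the set-based domain model, and the string stand-ins for learned executors are all assumptions; the real system uses a symbolic planner and an RL sub-problem in their place.

```python
def plan(domain, goal):
    """Stand-in symbolic planner: keeps only the goal steps the domain models."""
    return [a for a in goal if a in domain]

def learn_executor(action):
    """Stand-in for the RL sub-problem that learns an executor for a novel action."""
    return f"learned-{action}"

def run(domain, goal):
    """Plan; on an execution impasse, learn the missing executor, update the
    domain model, and plan again -- the loop sketched in the abstract."""
    learned = {}
    while True:
        steps = plan(domain, goal)
        if len(steps) == len(goal):            # plan covers the goal: execute it
            return [learned.get(a, a) for a in steps]
        for a in goal:                         # impasse: some step has no model
            if a not in domain:
                learned[a] = learn_executor(a) # learn new dynamics for the novelty
                domain.add(a)                  # reflect it in the domain model
```

For example, if the domain models `chop` and `craft` but a novelty introduces an unmodeled `open_door` step, `run` learns an executor for it and completes the plan on the second planning pass.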