Paper Title
Offline Meta Learning of Exploration
Paper Authors
Paper Abstract
Consider the following instance of the Offline Meta Reinforcement Learning (OMRL) problem: given the complete training logs of $N$ conventional RL agents, trained on $N$ different tasks, design a meta-agent that can quickly maximize reward in a new, unseen task from the same task distribution. In particular, while each conventional RL agent explored and exploited its own different task, the meta-agent must identify regularities in the data that lead to effective exploration/exploitation in the unseen task. Here, we take a Bayesian RL (BRL) view, and seek to learn a Bayes-optimal policy from the offline data. Building on the recent VariBAD BRL approach, we develop an off-policy BRL method that learns to plan an exploration strategy based on an adaptive neural belief estimate. However, learning to infer such a belief from offline data brings a new identifiability issue we term MDP ambiguity. We characterize the problem, and suggest resolutions via data collection and modification procedures. Finally, we evaluate our framework on a diverse set of domains, including difficult sparse reward tasks, and demonstrate learning of effective exploration behavior that is qualitatively different from the exploration used by any RL agent in the data.
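To make the setup described above concrete, here is a minimal sketch in PyTorch with hypothetical module names (BeliefEncoder, BeliefConditionedPolicy); it only illustrates the two ingredients named in the abstract, a recurrent belief inferred from the transition history and a policy conditioned on that belief, and is not the authors' implementation.

```python
# Schematic sketch of the belief-conditioned meta-agent described in the
# abstract (hypothetical names; NOT the authors' implementation). The offline
# data are the training logs of N conventional RL agents, one per task; the
# meta-agent conditions its actions on a neural belief estimated from the
# transition history, in the spirit of VariBAD.
import torch
import torch.nn as nn


class BeliefEncoder(nn.Module):
    """GRU mapping the history of (state, action, reward) to a belief embedding."""

    def __init__(self, obs_dim, act_dim, belief_dim=32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim + 1, belief_dim, batch_first=True)

    def forward(self, history):
        # history: (batch, time, obs_dim + act_dim + 1)
        _, h = self.rnn(history)
        return h[-1]  # (batch, belief_dim), the current belief estimate


class BeliefConditionedPolicy(nn.Module):
    """Policy acting on the current state together with the inferred belief."""

    def __init__(self, obs_dim, act_dim, belief_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + belief_dim, 64),
            nn.ReLU(),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs, belief):
        return self.net(torch.cat([obs, belief], dim=-1))


if __name__ == "__main__":
    obs_dim, act_dim = 4, 2
    encoder = BeliefEncoder(obs_dim, act_dim)
    policy = BeliefConditionedPolicy(obs_dim, act_dim)

    # A batch of short transition histories drawn from the offline logs.
    history = torch.randn(8, 10, obs_dim + act_dim + 1)
    obs = torch.randn(8, obs_dim)

    belief = encoder(history)     # infer a task belief from the history so far
    action = policy(obs, belief)  # act to trade off exploration and exploitation
    print(action.shape)           # torch.Size([8, 2])
```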