Paper Title
Evolutionary Selective Imitation: Interpretable Agents by Imitation Learning Without a Demonstrator
Paper Authors
Paper Abstract
We propose a new method for training an agent via an evolutionary strategy (ES) in which we iteratively improve a set of samples to imitate: starting with a random set, in every iteration we replace a subset of the samples with samples from the best trajectories discovered so far. The evaluation procedure for this set is to train, via supervised learning, a randomly initialised neural network (NN) to imitate the set, and then to execute the acquired policy against the environment. Our method is thus an ES based on a fitness function that expresses the effectiveness of imitating an evolving data subset. This is in contrast to other ES techniques, which iterate over the weights of the policy directly. By observing the samples that the agent selects for learning, the agent's evolving strategy can be interpreted and evaluated more explicitly than in NN learning. In our experiments, we trained an agent to solve the OpenAI Gym environment BipedalWalker-v3 by imitating an evolutionarily selected set of only 25 samples, using a NN with only a few thousand parameters. We further test our method on the Procgen game Plunder and show there, too, that the proposed method is an interpretable, small, robust and effective alternative to other ES or policy gradient methods.
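The evolutionary loop described in the abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's implementation: the 1-D toy environment, the nearest-neighbour policy standing in for the supervised NN, the set size, and the mutation of two samples per iteration are all made up for the sketch; only the overall structure (evolve a sample set, score it by imitating it and rolling out the resulting policy, refill it from the best trajectory found so far) follows the method as described.

```python
import random

def rollout(policy, steps=20, seed=0):
    """Run a policy in a toy 1-D environment (state drifts, reward for
    steering it toward zero). Returns total reward and the trajectory
    as (observation, action) samples."""
    rng = random.Random(seed)
    x, total, traj = 5.0, 0.0, []
    for _ in range(steps):
        a = policy(x)
        traj.append((x, a))
        x = x + a + rng.uniform(-0.1, 0.1)
        total += -abs(x)
    return total, traj

def imitate(samples):
    """Stand-in for training a small NN by supervised learning on the
    sample set: a nearest-neighbour policy that copies the action of
    the closest stored observation."""
    def policy(obs):
        return min(samples, key=lambda s: abs(s[0] - obs))[1]
    return policy

def evolve(iterations=200, set_size=5, seed=0):
    """ES over sample sets: the fitness of a set is the reward obtained
    by a policy trained to imitate it."""
    rng = random.Random(seed)
    # start from a random set of (observation, action) samples
    best_set = [(rng.uniform(-5, 5), rng.choice([-1.0, 1.0]))
                for _ in range(set_size)]
    best_fit, best_traj = rollout(imitate(best_set))
    for _ in range(iterations):
        cand = list(best_set)
        # mutate: replace a subset with samples from the best trajectory so far
        for i in rng.sample(range(set_size), k=2):
            cand[i] = rng.choice(best_traj)
        fit, traj = rollout(imitate(cand))
        if fit > best_fit:  # keep the candidate set only if fitness improves
            best_set, best_fit, best_traj = cand, fit, traj
    return best_set, best_fit
```

Because the final sample set is small and human-readable, inspecting the surviving (observation, action) pairs directly shows what behaviour the agent has selected to learn, which is the interpretability argument made in the abstract.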