Paper Title
On Embodied Visual Navigation in Real Environments Through Habitat
Paper Authors
Paper Abstract
Visual navigation models based on deep learning can learn effective policies when trained on large amounts of visual observations through reinforcement learning. Unfortunately, collecting the required experience in the real world requires the deployment of a robotic platform, which is expensive and time-consuming. To deal with this limitation, several simulation platforms have been proposed in order to train visual navigation policies on virtual environments efficiently. Despite the advantages they offer, simulators exhibit limited realism in terms of appearance and physical dynamics, leading to navigation policies that do not generalize to the real world. In this paper, we propose a tool based on the Habitat simulator which exploits real-world images of the environment, together with sensor and actuator noise models, to produce more realistic navigation episodes. We perform a range of experiments to assess the ability of such policies to generalize using virtual and real-world images, as well as observations transformed with unsupervised domain adaptation approaches. We also assess the impact of sensor and actuation noise on navigation performance and investigate whether it allows learning more robust navigation policies. We show that our tool can effectively help to train and evaluate navigation policies on real-world observations without running navigation episodes in the real world.
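To make the idea of sensor and actuator noise models concrete, the sketch below shows one generic way such noise can be injected into a simulated navigation step. This is an illustrative assumption, not the authors' tool: the function names, noise magnitudes, and Gaussian noise shapes are hypothetical, chosen only to mirror the kind of pluggable noise models simulators like Habitat support.

```python
import numpy as np

# Illustrative sketch (NOT the paper's implementation): Gaussian actuator
# noise on a commanded motion and multiplicative Gaussian sensor noise on
# a depth observation. All parameter values are hypothetical.

rng = np.random.default_rng(0)

def noisy_forward_step(distance_m, sigma_lin=0.01, sigma_rot_deg=1.0):
    """Actuator noise: the robot travels a slightly wrong distance
    and drifts in heading instead of executing the command exactly."""
    actual_distance = distance_m + rng.normal(0.0, sigma_lin)
    heading_drift_deg = rng.normal(0.0, sigma_rot_deg)
    return actual_distance, heading_drift_deg

def noisy_depth(depth_map, sigma=0.05):
    """Sensor noise: perturb each depth reading multiplicatively,
    clipping so depths stay non-negative."""
    noise = rng.normal(1.0, sigma, size=depth_map.shape)
    return np.clip(depth_map * noise, 0.0, None)

# Example: a commanded 0.25 m forward step and a flat 2 m depth image.
dist, drift = noisy_forward_step(0.25)
depth = noisy_depth(np.full((4, 4), 2.0))
```

Training a policy against perturbed transitions like these, rather than deterministic ones, is what the abstract refers to when it asks whether noise "allows learning more robust navigation policies".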