Paper Title

Cross-Descriptor Visual Localization and Mapping

Paper Authors

Mihai Dusmanu, Ondrej Miksik, Johannes L. Schönberger, Marc Pollefeys

Paper Abstract

Visual localization and mapping is the key technology underlying the majority of mixed reality and robotics systems. Most state-of-the-art approaches rely on local features to establish correspondences between images. In this paper, we present three novel scenarios for localization and mapping which require the continuous update of feature representations and the ability to match across different feature types. While localization and mapping is a fundamental computer vision problem, the traditional setup supposes the same local features are used throughout the evolution of a map. Thus, whenever the underlying features are changed, the whole process is repeated from scratch. However, this is typically impossible in practice, because raw images are often not stored and re-building the maps could lead to loss of the attached digital content. To overcome the limitations of current approaches, we present the first principled solution to cross-descriptor localization and mapping. Our data-driven approach is agnostic to the feature descriptor type, has low computational requirements, and scales linearly with the number of description algorithms. Extensive experiments demonstrate the effectiveness of our approach on state-of-the-art benchmarks for a variety of handcrafted and learned features.
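
To make the abstract's "scales linearly with the number of description algorithms" claim concrete, here is a minimal NumPy sketch, not the authors' implementation: the idea is that each descriptor type gets its own embedding network mapping into a single shared space, so supporting one more description algorithm adds exactly one more network. The `make_encoder` helper, the random linear encoders, the descriptor names, and the dimensions below are all illustrative assumptions; the actual approach trains these networks on correspondence data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, shared_dim=128):
    """Toy stand-in for a learned per-descriptor embedding network.

    A single random linear map; the real approach would train this so that
    embeddings of corresponding features agree across descriptor types.
    """
    W = rng.standard_normal((in_dim, shared_dim)) / np.sqrt(in_dim)
    def encode(desc):
        emb = desc @ W
        return emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize
    return encode

# One encoder per descriptor type (names and dimensions are illustrative).
encoders = {
    "SIFT": make_encoder(128),
    "HardNet": make_encoder(128),
    "SOSNet": make_encoder(128),
}

# Fake descriptors for two images described with *different* algorithms.
desc_a = rng.standard_normal((500, 128)).astype(np.float32)  # image A, SIFT
desc_b = rng.standard_normal((600, 128)).astype(np.float32)  # image B, HardNet

# Map both into the shared space, then match with mutual nearest neighbours.
emb_a = encoders["SIFT"](desc_a)
emb_b = encoders["HardNet"](desc_b)
sim = emb_a @ emb_b.T                      # cosine similarity (unit vectors)
nn_ab = sim.argmax(axis=1)                 # best match in B for each A
nn_ba = sim.argmax(axis=0)                 # best match in A for each B
mutual = [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
print(f"{len(mutual)} mutual nearest-neighbour matches across descriptor types")
```

With trained encoders, matching in the shared space is what would let images described with different feature algorithms be registered against the same map without re-extracting descriptors from the raw images.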
