Paper Title

Dense Semantic 3D Map Based Long-Term Visual Localization with Hybrid Features

Authors

Shi, Tianxin, Cui, Hainan, Song, Zhuo, Shen, Shuhan

Abstract

Visual localization plays an important role in many applications. However, due to large appearance variations such as seasonal and illumination changes, as well as weather and day-night variations, robust long-term visual localization remains a significant challenge. In this paper, we present a novel visual localization method that uses hybrid handcrafted and learned features together with a dense semantic 3D map. The hybrid features let us exploit their respective strengths under different imaging conditions, while the dense semantic map provides reliable and complete geometric and semantic information for constructing sufficient 2D-3D matching pairs with semantic consistency scores. In our pipeline, we retrieve and score each candidate database image through the semantic consistency between the dense model and the query image. The semantic consistency score is then used as a soft constraint in a weighted RANSAC-based PnP pose solver. Experimental results on long-term visual localization benchmarks demonstrate the effectiveness of our method compared with state-of-the-art approaches.
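The abstract does not specify the exact formulation of the weighted solver, so the following is a minimal sketch of how a per-match semantic consistency score could act as a soft constraint in a RANSAC-based PnP solver: minimal samples are drawn with probability proportional to each 2D-3D match's weight, and hypotheses are ranked by a weighted inlier count. The function names, the toy scoring rule, and the sampling/scoring scheme are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch (not the paper's implementation):
# semantic-consistency-weighted RANSAC PnP.
# Assumes pts2d (N,2) and pts3d (N,3) float64 arrays of 2D-3D matches,
# per-match semantic consistency weights in [0, 1], and intrinsics K (3,3).
import cv2
import numpy as np

def semantic_consistency(query_labels, point_labels):
    """Toy per-match score: 1 if the query pixel's semantic label agrees
    with the label of the matched 3D point, otherwise a small floor value."""
    return np.where(query_labels == point_labels, 1.0, 0.1)

def weighted_ransac_pnp(pts2d, pts3d, weights, K, iters=1000, thresh=4.0):
    """Weights act as a soft constraint: minimal samples are drawn with
    probability proportional to the weights, and each hypothesis is scored
    by its weighted inlier count instead of a plain inlier count."""
    probs = weights / weights.sum()
    best_score, best_pose = -1.0, None
    for _ in range(iters):
        # Sample a minimal set of 4 matches, biased toward consistent ones.
        idx = np.random.choice(len(pts2d), size=4, replace=False, p=probs)
        ok, rvec, tvec = cv2.solvePnP(
            pts3d[idx], pts2d[idx], K, None, flags=cv2.SOLVEPNP_P3P)
        if not ok:
            continue
        # Reprojection residuals of all matches under this hypothesis.
        proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
        residuals = np.linalg.norm(proj.reshape(-1, 2) - pts2d, axis=1)
        inliers = residuals < thresh
        score = float(weights[inliers].sum())  # weighted inlier count
        if score > best_score:
            best_score, best_pose = score, (rvec, tvec)
    return best_pose  # (rvec, tvec) of the best hypothesis, or None
```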
