Paper Title


Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences

Authors

Huai Yu, Weikun Zhen, Wen Yang, Ji Zhang, Sebastian Scherer

Abstract


Lightweight camera localization in existing maps is essential for vision-based navigation. Currently, visual and visual-inertial odometry (VO & VIO) techniques are well developed for state estimation, but they inevitably accumulate drift and exhibit pose jumps upon loop closure. To overcome these problems, we propose an efficient monocular camera localization method in prior LiDAR maps using direct 2D-3D line correspondences. To handle the appearance differences and modality gaps between LiDAR point clouds and images, geometric 3D lines are extracted offline from LiDAR maps while robust 2D lines are extracted online from video sequences. With the pose prediction from VIO, we can efficiently obtain coarse 2D-3D line correspondences. Then the camera poses and 2D-3D correspondences are iteratively optimized by minimizing the projection error of the correspondences and rejecting outliers. Experimental results on the EuRoC MAV dataset and our collected dataset demonstrate that the proposed method can efficiently estimate camera poses without accumulated drift or pose jumps in structured environments.
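The core quantity the abstract describes is the projection error of a 2D-3D line correspondence: a 3D map line is projected into the image under the current pose estimate, and its distance to the matched 2D image line is minimized. A minimal sketch of one common formulation of this residual (the paper's exact cost function is not reproduced here; the intrinsics, pose, and line values below are illustrative assumptions) is:

```python
import numpy as np

def project_point(K, R, t, P):
    """Project a 3D map point P into the image under pose (R, t)."""
    p = K @ (R @ P + t)
    return p[:2] / p[2]

def line_projection_error(K, R, t, P1, P2, l2d):
    """Sum of point-to-line distances of the projected 3D endpoints
    (P1, P2) to the matched 2D line l2d = (a, b, c), where
    a*u + b*v + c = 0 and a^2 + b^2 = 1 (normalized line).
    """
    err = 0.0
    for P in (P1, P2):
        u, v = project_point(K, R, t, P)
        err += abs(l2d[0] * u + l2d[1] * v + l2d[2])
    return err

# Toy check: identity pose, a 3D line along x at y=0, z=2.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
P1, P2 = np.array([-1.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0])
# That line projects onto the image row v = 240, i.e. the
# homogeneous line (0, 1, -240).
l2d = np.array([0.0, 1.0, -240.0])
print(line_projection_error(K, R, t, P1, P2, l2d))  # 0.0 for a perfect match
```

In an iterative scheme like the one described, this residual would be summed over all tentative correspondences inside a robust optimizer, with high-residual pairs rejected as outliers between iterations.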
