Paper Title

A Self-Supervised, Differentiable Kalman Filter for Uncertainty-Aware Visual-Inertial Odometry

Paper Authors

Brandon Wagstaff, Emmett Wise, Jonathan Kelly

Paper Abstract

Visual-inertial odometry (VIO) systems traditionally rely on filtering or optimization-based techniques for egomotion estimation. While these methods are accurate under nominal conditions, they are prone to failure during severe illumination changes, rapid camera motions, or on low-texture image sequences. Learning-based systems have the potential to outperform classical implementations in challenging environments, but, currently, do not perform as well as classical methods in nominal settings. Herein, we introduce a framework for training a hybrid VIO system that leverages the advantages of learning and standard filtering-based state estimation. Our approach is built upon a differentiable Kalman filter, with an IMU-driven process model and a robust, neural network-derived relative pose measurement model. The use of the Kalman filter framework enables the principled treatment of uncertainty at training time and at test time. We show that our self-supervised loss formulation outperforms a similar, supervised method, while also enabling online retraining. We evaluate our system on a visually degraded version of the EuRoC dataset and find that our estimator operates without a significant reduction in accuracy in cases where classical estimators consistently diverge. Finally, by properly utilizing the metric information contained in the IMU measurements, our system is able to recover metric scene scale, while other self-supervised monocular VIO approaches cannot.
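
To make the "differentiable Kalman filter" idea concrete, below is a minimal sketch, not the authors' implementation, assuming a PyTorch setting: the `MeasurementNet` class, the identity measurement matrix, and all noise values are illustrative placeholders. It shows how a Kalman update written in differentiable tensor operations lets a training loss backpropagate through the filter gain into a network that predicts both a relative pose measurement and its covariance.

```python
# Minimal sketch (not the authors' code) of a differentiable Kalman filter
# update step. The 6-D state and the H = identity measurement model are
# illustrative assumptions; IMU propagation is assumed to supply x_pred, P_pred.
import torch
import torch.nn as nn

class MeasurementNet(nn.Module):
    """Toy stand-in for a network mapping image features to a relative pose
    measurement and a per-measurement covariance (both differentiable)."""
    def __init__(self, feat_dim=128, state_dim=6):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.pose_head = nn.Linear(64, state_dim)      # relative pose vector
        self.logvar_head = nn.Linear(64, state_dim)    # log-variances -> diagonal R

    def forward(self, feats):
        h = self.backbone(feats)
        z = self.pose_head(h)
        R = torch.diag_embed(torch.exp(self.logvar_head(h)))  # measurement covariance
        return z, R

def kf_update(x_pred, P_pred, z, R, H):
    """Standard Kalman update in torch ops, so gradients flow through the gain."""
    S = H @ P_pred @ H.transpose(-1, -2) + R                 # innovation covariance
    K = P_pred @ H.transpose(-1, -2) @ torch.linalg.inv(S)   # Kalman gain
    innov = z - (H @ x_pred.unsqueeze(-1)).squeeze(-1)
    x_new = x_pred + (K @ innov.unsqueeze(-1)).squeeze(-1)
    I = torch.eye(x_pred.shape[-1])
    P_new = (I - K @ H) @ P_pred
    return x_new, P_new

# Usage: placeholder features stand in for an image pair.
net = MeasurementNet()
feats = torch.randn(1, 128)
z, R = net(feats)
x_pred = torch.zeros(1, 6)                 # IMU-propagated state (placeholder)
P_pred = torch.eye(6).unsqueeze(0) * 0.1   # propagated covariance (placeholder)
H = torch.eye(6).unsqueeze(0)              # direct observation of the state
x_post, P_post = kf_update(x_pred, P_pred, z, R, H)
loss = x_post.square().mean()              # stand-in for a self-supervised loss
loss.backward()                            # gradients reach the measurement network
```

Because the gain K depends on the network's predicted covariance R, training through the filter in this way encourages the network to down-weight its own measurements when the images are degraded, which is the uncertainty-aware behavior described in the abstract.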
