Paper Title
A Lightweight Domain Adaptive Absolute Pose Regressor Using Barlow Twins Objective
Paper Authors
Paper Abstract
Identifying the camera pose for a given image is a challenging problem with applications in robotics, autonomous vehicles, and augmented/virtual reality. Lately, learning-based methods have been shown to be effective for absolute camera pose estimation. However, these methods are not accurate when generalizing to different domains. In this paper, a domain adaptive training framework for absolute pose regression is introduced. In the proposed framework, the scene image is augmented into different domains by generative methods, and parallel branches are trained using the Barlow Twins objective. The parallel branches leverage a lightweight CNN-based absolute pose regressor architecture. Further, the efficacy of incorporating spatial and channel-wise attention in the regression head for rotation prediction is investigated. Our method is evaluated on two datasets, Cambridge Landmarks and 7Scenes. The results demonstrate that, even while using roughly 24 times fewer FLOPs, 12 times fewer activations, and 5 times fewer parameters than MS-Transformer, our approach outperforms all CNN-based architectures and achieves performance comparable to transformer-based architectures. Our method ranks 2nd and 4th on the Cambridge Landmarks and 7Scenes datasets, respectively. In addition, for augmented domains not encountered during training, our approach significantly outperforms MS-Transformer. Furthermore, our domain adaptive framework is shown to achieve better performance than a single-branch model trained with the identical CNN backbone on all instances of the unseen distribution.
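The abstract names the Barlow Twins objective but does not spell it out. Below is a minimal PyTorch sketch of that loss as defined in the original Barlow Twins paper (Zbontar et al., 2021), written as it might be applied to the embeddings of the two parallel branches; the function name, the normalization epsilon, and the default off-diagonal weight are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of the Barlow Twins loss for the two parallel branches.
# Assumed hyperparameters (lambda_off, eps) follow common usage, not
# necessarily this paper's settings.
import torch


def barlow_twins_loss(z_a: torch.Tensor,
                      z_b: torch.Tensor,
                      lambda_off: float = 5e-3,
                      eps: float = 1e-6) -> torch.Tensor:
    """z_a, z_b: (N, D) embeddings of the original scene image and its
    generatively augmented counterpart, one per branch."""
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + eps)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + eps)
    # (D, D) cross-correlation matrix between the two branches.
    c = (z_a.T @ z_b) / n
    diag = torch.diagonal(c)
    # Invariance term: pull diagonal entries toward 1 so the two
    # branches produce matching features despite the domain shift.
    on_diag = (diag - 1.0).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal correlations to 0.
    off_diag = c.pow(2).sum() - diag.pow(2).sum()
    return on_diag + lambda_off * off_diag


# Hypothetical usage: `backbone` stands in for the lightweight CNN
# regressor trunk; `img` and `img_aug` are a scene image and its
# augmented-domain version produced by a generative method.
# loss = barlow_twins_loss(backbone(img), backbone(img_aug))

In this formulation the invariance term encourages the two branches to agree on a representation across domains, while the redundancy-reduction term decorrelates the embedding dimensions, which is what makes the objective suitable for the domain adaptive training the abstract describes.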