Title
Boosting Monocular Depth Estimation with Lightweight 3D Point Fusion
Authors
Abstract
In this paper, we propose enhancing monocular depth estimation by adding 3D points as depth guidance. Unlike existing depth completion methods, our approach performs well on extremely sparse and unevenly distributed point clouds, which makes it agnostic to the source of the 3D points. We achieve this by introducing a novel multi-scale 3D point fusion network that is both lightweight and efficient. We demonstrate its versatility on two different depth estimation problems where the 3D points have been acquired with conventional structure-from-motion and LiDAR. In both cases, our network performs on par with state-of-the-art depth completion methods and achieves significantly higher accuracy when only a small number of points is used, while being more compact in terms of parameter count. We show that our method outperforms some contemporary deep-learning-based multi-view stereo and structure-from-motion methods in both accuracy and compactness.
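As an illustration of the kind of sparse depth guidance the abstract refers to, the sketch below projects an arbitrary 3D point cloud (e.g. from structure-from-motion or LiDAR) into a sparse per-pixel depth channel that a fusion network could consume alongside the RGB image. The function name, interface, and pinhole-projection setup are our own illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def points_to_sparse_depth(points, K, height, width):
    """Project 3D points (camera frame) into a sparse depth map.

    points: (N, 3) array of [X, Y, Z] coordinates, Z > 0 in front of the camera.
    K:      (3, 3) pinhole intrinsics matrix.
    Returns an (H, W) float32 map holding Z at pixels a point lands on, 0 elsewhere.
    This is a hypothetical helper, not code from the paper.
    """
    depth = np.zeros((height, width), dtype=np.float32)
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    valid = Z > 0  # keep only points in front of the camera
    # Standard pinhole projection: u = fx * X / Z + cx, v = fy * Y / Z + cy
    u = np.round(K[0, 0] * X[valid] / Z[valid] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * Y[valid] / Z[valid] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = Z[valid][inside]
    return depth

# Usage: two points with a toy intrinsics matrix (fx = fy = 100, cx = cy = 50).
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0],   # projects to pixel (50, 50)
                [0.1, 0.0, 2.0]])  # projects to pixel (55, 50)
sparse = points_to_sparse_depth(pts, K, 100, 100)
```

Such a map (optionally with a binary validity mask) is the usual way a sparse point cloud is turned into an image-aligned input channel; the fusion network itself is then responsible for densifying it under image guidance.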