Paper Title

DiverseDepth: Affine-invariant Depth Prediction Using Diverse Data

Authors

Wei Yin, Xinlong Wang, Chunhua Shen, Yifan Liu, Zhi Tian, Songcen Xu, Changming Sun, Dou Renyin

Abstract

We present a method for depth estimation with monocular images, which can predict high-quality depth on diverse scenes up to an affine transformation, thus preserving accurate shapes of a scene. Previous methods that predict metric depth often work well only for a specific scene. In contrast, learning relative depth (information of being closer or further) can enjoy better generalization, at the cost of failing to recover the accurate geometric shape of the scene. In this work, we propose a dataset and methods to tackle this dilemma, aiming to predict accurate depth up to an affine transformation with good generalization to diverse scenes. First, we construct a large-scale and diverse dataset, termed Diverse Scene Depth dataset (DiverseDepth), which has a broad range of scenes and foreground contents. Compared with previous learning objectives, i.e., learning metric depth or relative depth, we propose to learn the affine-invariant depth using our diverse dataset to ensure both generalization and high-quality geometric shapes of scenes. Furthermore, in order to train the model on the complex dataset effectively, we propose a multi-curriculum learning method. Experiments show that our method outperforms previous methods on 8 datasets by a large margin under the zero-shot test setting, demonstrating the excellent generalization capacity of the learned model to diverse scenes. The reconstructed point clouds with the predicted depth show that our method can recover high-quality 3D shapes. Code and dataset are available at: https://tinyurl.com/DiverseDepth
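The phrase "depth up to an affine transformation" means the network's output is only expected to match the ground truth after an unknown per-image scale and shift. The snippet below is a minimal NumPy sketch, not the authors' released code, of the standard least-squares scale-and-shift alignment commonly used to evaluate such affine-invariant predictions; the function name, array shapes, and the synthetic ground truth are illustrative assumptions.

import numpy as np

def align_scale_shift(pred, gt, valid_mask):
    # Least-squares fit of scale s and shift t so that s*pred + t ~= gt
    # over valid pixels, i.e. the affine alignment applied before
    # computing depth errors on an affine-invariant prediction.
    p = pred[valid_mask].astype(np.float64)
    g = gt[valid_mask].astype(np.float64)
    # Solve min_{s,t} || s*p + t - g ||^2 with a linear least-squares system.
    A = np.stack([p, np.ones_like(p)], axis=1)   # (N, 2) design matrix
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t, s, t

# Hypothetical usage: a network prediction and a synthetic metric ground truth.
pred = np.random.rand(480, 640)      # affine-invariant prediction (arbitrary scale/shift)
gt = 2.5 * pred + 0.3                # synthetic metric depth for illustration
mask = gt > 0                        # valid-depth mask
aligned, s, t = align_scale_shift(pred, gt, mask)
# Metrics such as absolute relative error would be computed on `aligned`, not raw `pred`.

Because the alignment absorbs any global scale and shift, a model trained this way can be supervised on heterogeneous data sources whose depth ranges and units differ, which is what allows training on a diverse mixed dataset such as DiverseDepth.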
