Paper Title
Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields
Paper Authors
Paper Abstract
Current methods for depth map prediction from monocular images tend to predict smooth, poorly localized contours for the occlusion boundaries in the input image. This is unfortunate, as occlusion boundaries are important cues for recognizing objects and, as we show, may lead to a way to discover new objects from scene reconstruction. To improve predicted depth maps, recent methods rely on various forms of filtering or predict an additive residual depth map to refine a first estimate. We instead learn to predict, given a depth map predicted by some reconstruction method, a 2D displacement field able to re-sample pixels around the occlusion boundaries into sharper reconstructions. Our method can be applied to the output of any depth estimation method in an end-to-end trainable fashion. For evaluation, we manually annotated the occlusion boundaries in all the images in the test split of the popular NYUv2-Depth dataset. We show that our approach improves the localization of occlusion boundaries for all state-of-the-art monocular depth estimation methods that we could evaluate, without degrading the depth accuracy for the rest of the images.
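The core idea of the abstract, re-sampling a depth map with a per-pixel 2D displacement field so that depth values from the correct side of an occlusion boundary replace the smoothed in-between values, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, nearest-neighbour sampling, and the toy displacement field are all assumptions for illustration (end-to-end training would instead use a differentiable bilinear sampler such as PyTorch's `grid_sample`).

```python
import numpy as np

def resample_depth(depth, disp):
    """Re-sample a depth map with a per-pixel 2D displacement field.

    depth: (H, W) array of predicted depths.
    disp:  (H, W, 2) array of (dy, dx) displacements in pixels;
           output pixel (y, x) reads the input depth at (y + dy, x + dx).

    Nearest-neighbour sampling is used here so that re-sampled boundary
    depths stay sharp; interpolating would re-blur the discontinuity.
    (Illustrative choice, not necessarily the paper's.)
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.rint(ys + disp[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xs + disp[..., 1]).astype(int), 0, W - 1)
    return depth[sy, sx]

# Toy example: a smoothed step edge. The middle pixel (value 3) sits on a
# blurred occlusion boundary; a +1 horizontal displacement pulls in the
# depth from the far side, sharpening the step.
depth = np.array([[1.0, 1.0, 3.0, 5.0, 5.0]])
disp = np.zeros((1, 5, 2))
disp[0, 2, 1] = 1.0  # shift sampling of the blurred pixel one step right
sharp = resample_depth(depth, disp)  # [[1., 1., 5., 5., 5.]]
```

Note that, unlike an additive residual correction, re-sampling can only copy depth values that already exist in the map, which is exactly why it preserves depth accuracy away from the boundaries.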