Paper Title

Self-Supervised Monocular Depth Estimation: Solving the Edge-Fattening Problem

Authors

Xingyu Chen, Ruonan Zhang, Ji Jiang, Yan Wang, Ge Li, Thomas H. Li

Abstract

Self-supervised monocular depth estimation (MDE) models universally suffer from the notorious edge-fattening issue. Triplet loss, as a widespread metric learning strategy, has largely succeeded in many computer vision applications. In this paper, we redesign the patch-based triplet loss in MDE to alleviate the ubiquitous edge-fattening issue. We show two drawbacks of the raw triplet loss in MDE and demonstrate our problem-driven redesigns. First, we present a min-operator-based strategy applied to all negative samples, to prevent well-performing negatives from sheltering the error of edge-fattening negatives. Second, we split the anchor-positive distance and anchor-negative distance from within the original triplet, which directly optimizes the positives without any mutual effect with the negatives. Extensive experiments show the combination of these two small redesigns can achieve unprecedented results: Our powerful and versatile triplet loss not only makes our model outperform all previous SoTA by a large margin, but also provides substantial performance boosts to a large number of existing models, while introducing no extra inference computation at all.
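The abstract describes two concrete redesigns of the raw triplet loss: taking a min over all anchor-negative distances so easy negatives cannot mask the hard, edge-fattening ones, and decoupling the anchor-positive and anchor-negative terms so each is optimized independently. The following is a minimal sketch of that idea; the function name, margin values, and the exact hinge form are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def decoupled_min_triplet_loss(anchor, positive, negatives,
                               margin_pos=0.1, margin_neg=1.0):
    """Illustrative sketch of the two redesigns described in the abstract.

    Margins and the exact hinge form are assumed, not taken from the paper.
    """
    anchor = np.asarray(anchor, dtype=float)
    d_ap = np.linalg.norm(anchor - np.asarray(positive, dtype=float))

    # Redesign 1: min over ALL negatives, so one hard (edge-fattening)
    # negative cannot be sheltered by many well-performing ones.
    d_an = min(np.linalg.norm(anchor - np.asarray(n, dtype=float))
               for n in negatives)

    # Redesign 2: split the two distances into independent hinge terms,
    # so pulling in the positive does not interact with the negatives.
    return max(d_ap - margin_pos, 0.0) + max(margin_neg - d_an, 0.0)
```

With a close positive and all negatives beyond the negative margin, both hinges are inactive and the loss is zero; a single nearby negative reactivates the second term regardless of how far the other negatives are.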
