Paper Title
Active Self-Training for Weakly Supervised 3D Scene Semantic Segmentation
Paper Authors
Paper Abstract
Since the preparation of labeled data for training semantic segmentation networks of point clouds is a time-consuming process, weakly supervised approaches have been introduced to learn from only a small fraction of data. These methods are typically based on learning with contrastive losses while automatically deriving per-point pseudo-labels from a sparse set of user-annotated labels. In this paper, our key observation is that the selection of what samples to annotate is as important as how these samples are used for training. Thus, we introduce a method for weakly supervised segmentation of 3D scenes that combines self-training with active learning. The active learning selects points for annotation that likely result in performance improvements to the trained model, while the self-training makes efficient use of the user-provided labels for learning the model. We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous works and baselines, while requiring only a small number of user annotations.
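To illustrate the active-learning component described above, the sketch below shows a common uncertainty-based acquisition baseline: rank unlabeled points by the entropy of the model's per-point softmax predictions and propose the most uncertain ones for user annotation. This is a hypothetical, minimal example for intuition only; the function name, the entropy criterion, and the toy probabilities are assumptions, not the paper's actual selection strategy.

```python
import numpy as np

def select_points_for_annotation(probs, k):
    """Rank points by predictive entropy (a hypothetical acquisition
    criterion) and return the indices of the k most uncertain points,
    most uncertain first. probs: (num_points, num_classes) softmax
    probabilities from the current segmentation model."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:][::-1]

# Toy example: 4 points, 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
chosen = select_points_for_annotation(probs, k=2)
print(chosen)  # -> [1 3]: the two most ambiguous points are proposed
```

In an active self-training loop, the labels the user provides for these selected points would then seed pseudo-label propagation over the remaining points before the next selection round.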