Paper Title
FreeSOLO: Learning to Segment Objects without Annotations
Paper Authors
Paper Abstract
Instance segmentation is a fundamental vision task that aims to recognize and segment each object in an image. However, it requires costly annotations such as bounding boxes and segmentation masks for learning. In this work, we propose a fully unsupervised learning method that learns class-agnostic instance segmentation without any annotations. We present FreeSOLO, a self-supervised instance segmentation framework built on top of the simple instance segmentation method SOLO. Our method also presents a novel localization-aware pre-training framework, where objects can be discovered from complicated scenes in an unsupervised manner. FreeSOLO achieves 9.8% AP_{50} on the challenging COCO dataset, which even outperforms several segmentation proposal methods that use manual annotations. For the first time, we demonstrate unsupervised class-agnostic instance segmentation successfully. FreeSOLO's box localization significantly outperforms state-of-the-art unsupervised object detection/discovery methods, with about 100% relative improvements in COCO AP. FreeSOLO further demonstrates superiority as a strong pre-training method, outperforming state-of-the-art self-supervised pre-training methods by +9.8% AP when fine-tuning instance segmentation with only 5% COCO masks. Code is available at: github.com/NVlabs/FreeSOLO