Paper Title

ProgressLabeller: Visual Data Stream Annotation for Training Object-Centric 3D Perception

Paper Authors

Xiaotong Chen, Huijie Zhang, Zeren Yu, Stanley Lewis, Odest Chadwicke Jenkins

Paper Abstract

Visual perception tasks often require vast amounts of labelled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can prove difficult or time-intensive to scale up to efficacy for general use. Consider the task of pose estimation for rigid objects. Deep neural network-based approaches have shown good performance when trained on large public datasets. However, adapting these networks to other novel objects, or fine-tuning existing models for different environments, requires significant time investment to generate newly labelled instances. Towards this end, we propose ProgressLabeller as a method for more efficiently generating large amounts of 6D pose training data from color image sequences for custom scenes in a scalable manner. ProgressLabeller is also designed to support transparent and translucent objects, for which previous methods based on dense depth reconstruction fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network to markedly improve downstream robotic grasp success rates. ProgressLabeller is open-source at https://github.com/huijieZH/ProgressLabeller.
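
The core idea implied by the abstract, labelling an object's pose once in a reconstructed scene and then propagating that label to every frame of the color image stream via the recovered camera trajectory, comes down to a transform composition. Below is a minimal sketch of that step, not ProgressLabeller's actual API: the function name, the NumPy dependency, and the 4x4 homogeneous-matrix convention are assumptions made for illustration.

```python
# Minimal sketch: derive a per-frame 6D pose label from one world-frame
# object annotation plus per-frame camera poses (e.g. from SfM/SLAM).
# Hypothetical helper; not part of the ProgressLabeller codebase.
import numpy as np

def object_pose_in_camera(T_world_cam: np.ndarray,
                          T_world_obj: np.ndarray) -> np.ndarray:
    """Return the 4x4 object-to-camera transform for one frame.

    T_world_cam: 4x4 camera pose in the world frame (from reconstruction).
    T_world_obj: 4x4 object pose in the world frame (labelled once).
    """
    return np.linalg.inv(T_world_cam) @ T_world_obj

# Example: a camera translated 1 m along x, object at the world origin.
T_world_obj = np.eye(4)
T_world_cam = np.eye(4)
T_world_cam[0, 3] = 1.0

label = object_pose_in_camera(T_world_cam, T_world_obj)
print(label[:3, 3])  # object sits at x = -1 in this camera's frame
```

Because only the camera trajectory is needed per frame, this propagation works from color images alone, which is what lets the approach cover transparent and translucent objects that defeat dense depth reconstruction.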
