Paper Title

Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning

Authors

Guilherme Maeda, Joni Väätäinen, Hironori Yoshida

Abstract


One of the challenges of full autonomy is to have a robot capable of manipulating its current environment to achieve another environment configuration. This paper is a step towards this challenge, focusing on the visual understanding of the task. Our approach trains a deep neural network to represent images as measurable features that are useful to estimate the progress (or phase) of a task. The training uses numerous variations of images of identical tasks when taken under the same phase index. The goal is to make the network sensitive to differences in task progress but insensitive to the appearance of the images. To this end, our method builds upon Time-Contrastive Networks (TCNs) to train a network using only discrete snapshots taken at different stages of a task. A robot can then solve long-horizon tasks by using the trained network to identify the progress of the current task and by iteratively calling a motion planner until the task is solved. We quantify the granularity achieved by the network in two simulated environments. In the first, to detect the number of objects in a scene and in the second to measure the volume of particles in a cup. Our experiments leverage this granularity to make a mobile robot move a desired number of objects into a storage area and to control the amount of pouring in a cup.
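The pipeline the abstract describes — train an embedding with a time-contrastive (triplet) objective so that images at the same phase index map close together, then estimate progress from the current image and call a motion planner until the goal phase is reached — can be sketched minimally as below. This is an illustrative sketch, not the authors' implementation: the triplet loss stands in for the TCN objective, and `observe`, `plan_and_execute`, and the phase-prototype lookup are hypothetical placeholders.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """TCN-style objective: pull same-phase embeddings together,
    push different-phase embeddings at least `margin` apart."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def estimate_phase(embedding, phase_prototypes):
    """Estimated task progress = index of the nearest phase prototype
    (e.g. the mean embedding of training snapshots at that phase)."""
    dists = [np.sum((embedding - p) ** 2) for p in phase_prototypes]
    return int(np.argmin(dists))

def solve_task(observe, plan_and_execute, phase_prototypes,
               goal_phase, max_steps=20):
    """Iteratively estimate progress from the current image embedding
    and call the planner until the goal phase is reached."""
    for _ in range(max_steps):
        phase = estimate_phase(observe(), phase_prototypes)
        if phase >= goal_phase:
            return True
        plan_and_execute(phase)
    return False
```

In this sketch the trained network is abstracted into `observe()` (which would return the embedding of the current camera image); the nearest-prototype lookup gives the discrete phase granularity that the paper quantifies in the two simulated environments.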
