Paper Title
Touch and Go: Learning from Human-Collected Vision and Touch
Paper Authors
Paper Abstract
The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
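The abstract's first task, self-supervised visuo-tactile feature learning, can be illustrated with a minimal contrastive-learning sketch on paired (image, touch) frames. This is an assumption-laden illustration, not the authors' exact method: the ResNet-18 encoders, the InfoNCE-style loss, the temperature value, and all tensor shapes are illustrative choices.

```python
# Minimal sketch (not the paper's exact method) of self-supervised
# visuo-tactile feature learning: a contrastive (InfoNCE-style) objective
# over paired camera frames and tactile-sensor images.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class VisuoTactileContrast(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Separate encoders for egocentric video frames and tactile readings
        # rendered as images (encoder choice is an assumption).
        self.vision_enc = resnet18(weights=None)
        self.touch_enc = resnet18(weights=None)
        feat_dim = self.vision_enc.fc.in_features
        self.vision_enc.fc = nn.Linear(feat_dim, embed_dim)
        self.touch_enc.fc = nn.Linear(feat_dim, embed_dim)

    def forward(self, images: torch.Tensor, touches: torch.Tensor):
        # L2-normalized embeddings for each modality.
        z_v = F.normalize(self.vision_enc(images), dim=-1)
        z_t = F.normalize(self.touch_enc(touches), dim=-1)
        return z_v, z_t


def info_nce(z_v: torch.Tensor, z_t: torch.Tensor, tau: float = 0.07):
    # Paired (image, touch) samples are positives; all other pairs in the
    # batch serve as negatives. Symmetric cross-entropy over the similarity matrix.
    logits = z_v @ z_t.t() / tau
    targets = torch.arange(z_v.size(0), device=z_v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = VisuoTactileContrast()
    imgs = torch.randn(8, 3, 224, 224)    # batch of egocentric video frames
    touch = torch.randn(8, 3, 224, 224)   # corresponding tactile-sensor frames
    z_v, z_t = model(imgs, touch)
    print(info_nce(z_v, z_t).item())
```

The learned visual encoder could then be reused for downstream tasks such as the tactile-driven stylization and future tactile-frame prediction mentioned above; those pipelines are not sketched here.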