Paper Title


Using Collocated Vision and Tactile Sensors for Visual Servoing and Localization

Paper Authors

Arkadeep Narayan Chaudhury, Timothy Man, Wenzhen Yuan, Christopher G. Atkeson

Paper Abstract


Coordinating proximity and tactile imaging by collocating cameras with tactile sensors can 1) provide useful information before contact, such as object pose estimates, and visually servo a robot to a target with reduced occlusion and higher resolution compared to head-mounted or external depth cameras, 2) simplify the contact point and pose estimation problems and help tactile sensing avoid erroneous matches when a surface does not have significant texture or has repetitive texture with many possible matches, and 3) use tactile imaging to further refine contact point and object pose estimation. We demonstrate our results with objects that have more surface texture than most objects in standard manipulation datasets. We learn that optic flow needs to be integrated over a substantial amount of camera travel to be useful in predicting movement direction. Most importantly, we also learn that state-of-the-art vision algorithms do not do a good job localizing tactile images on object models, unless a reasonable prior can be provided from collocated cameras.
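The abstract's finding that optic flow must be integrated over substantial camera travel can be illustrated with a toy simulation (not the paper's method or data; the per-frame displacement and noise magnitudes below are invented assumptions): any single frame's flow estimate is dominated by noise, but summing flow over many frames recovers the true motion direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true camera motion direction in the image plane (unit vector).
true_dir = np.array([1.0, 0.0])

# Simulated per-frame optic flow estimates: a small true displacement
# per frame plus noise that dominates any individual frame.
per_frame_flow = 0.2 * true_dir + rng.normal(0.0, 0.5, size=(200, 2))

def direction_error_deg(flow):
    """Angle in degrees between an estimated flow vector and true_dir."""
    u = flow / np.linalg.norm(flow)
    return np.degrees(np.arccos(np.clip(u @ true_dir, -1.0, 1.0)))

# Direction error from one frame vs. from flow integrated over all frames.
mean_single_frame_err = np.mean([direction_error_deg(f) for f in per_frame_flow])
integrated_err = direction_error_deg(per_frame_flow.sum(axis=0))
print(f"mean single-frame error: {mean_single_frame_err:.1f} deg")
print(f"integrated error:        {integrated_err:.1f} deg")
```

With these (made-up) noise levels, the integrated estimate is far more accurate than a typical single-frame estimate, mirroring the qualitative claim that flow is only useful for predicting movement direction after integration over camera travel.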
