Title

DGCM-Net: Dense Geometrical Correspondence Matching Network for Incremental Experience-based Robotic Grasping

Authors

Patten, Timothy, Park, Kiru, Vincze, Markus

Abstract

This article presents a method for grasping novel objects by learning from experience. Successful attempts are remembered and then used to guide future grasps such that more reliable grasping is achieved over time. To generalise the learned experience to unseen objects, we introduce the dense geometric correspondence matching network (DGCM-Net). This applies metric learning to encode objects with similar geometry nearby in feature space. Retrieving relevant experience for an unseen object is thus a nearest neighbour search with the encoded feature maps. DGCM-Net also reconstructs 3D-3D correspondences using the view-dependent normalised object coordinate space to transform grasp configurations from retrieved samples to unseen objects. In comparison to baseline methods, our approach achieves an equivalent grasp success rate. However, the baselines are significantly improved when fusing the knowledge from experience with their grasp proposal strategy. Offline experiments with a grasping dataset highlight the capability to generalise within and between object classes as well as to improve success rate over time from increasing experience. Lastly, by learning task-relevant grasps, our approach can prioritise grasps that enable the functional use of objects.
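The retrieval-and-transfer idea in the abstract — find the nearest stored experience in feature space, estimate a rigid transform from 3D-3D correspondences, and map the remembered grasp onto the unseen object — can be sketched as below. This is an illustrative sketch only, not the authors' implementation: `nearest_experience`, `rigid_transform_3d`, and `transfer_grasp` are hypothetical helpers, the features here are plain vectors rather than DGCM-Net's learned feature maps, and the correspondences are assumed given rather than predicted by the network.

```python
import numpy as np

def nearest_experience(query_feat, db_feats):
    """Nearest-neighbour search: return the index of the stored experience
    whose feature vector is closest (Euclidean) to the query."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return int(np.argmin(dists))

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) from 3D-3D
    correspondences: finds R, t such that dst ≈ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def transfer_grasp(grasp_point, R, t):
    """Map a grasp point from the retrieved sample's frame onto the
    unseen object using the estimated transform."""
    return R @ grasp_point + t
```

In the paper the correspondences come from the predicted view-dependent normalised object coordinates, which give a shared canonical frame for the retrieved and observed objects; the rigid fit above stands in for that alignment step.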
