Paper Title

Performance Evaluation of Low-Cost Machine Vision Cameras for Image-Based Grasp Verification

Authors

Nair, Deebul, Pakdaman, Amirhossein, Plöger, Paul G.

Abstract

Grasp verification is advantageous for autonomous manipulation robots because it provides the feedback that higher-level planning components require about successful task completion. However, a major obstacle to grasp verification is sensor selection. In this paper, we propose a vision-based grasp verification system using machine vision cameras, with the verification problem formulated as an image classification task. A machine vision camera consists of a camera and a processing unit capable of on-board deep learning inference. Inference on this low-power hardware is performed near the data source, reducing the robot's dependence on a centralized server and thereby lowering latency and improving reliability. Machine vision cameras provide deep learning inference capabilities through different neural accelerators. However, the effect of these neural accelerators on performance metrics such as latency and throughput is not clear from the cameras' documentation. To systematically benchmark these machine vision cameras, we propose a parameterized model generator that produces end-to-end convolutional neural network (CNN) models. Using these generated models, we benchmark the latency and throughput of two machine vision cameras, the JeVois A33 and the Sipeed Maix Bit. Our experiments demonstrate that the selected machine vision camera and deep learning models can robustly verify grasps with 97% per-frame accuracy.
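The paper itself does not publish code in this listing, but the abstract's idea of a parameterized model generator can be illustrated with a minimal Keras sketch. Everything below is an assumption for illustration: the function names (generate_model, measure_latency) and parameters (num_conv_layers, base_filters, etc.) are hypothetical and not the authors' actual interface, and the timing loop measures host-side inference only; benchmarking the JeVois A33 or Sipeed Maix Bit would additionally require converting the generated model to each camera's on-device runtime.

```python
# Hypothetical sketch of a parameterized CNN model generator for grasp-verification
# benchmarking. Names and defaults are illustrative assumptions, not the paper's code.
import time

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def generate_model(input_shape=(128, 128, 3), num_conv_layers=3,
                   base_filters=16, kernel_size=3, num_classes=2):
    """Build an end-to-end CNN whose compute cost is set by a few parameters."""
    model = models.Sequential()
    model.add(tf.keras.Input(shape=input_shape))
    for i in range(num_conv_layers):
        # Double the filter count at each stage to scale the workload.
        model.add(layers.Conv2D(base_filters * (2 ** i), kernel_size,
                                padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model


def measure_latency(model, num_runs=100):
    """Rough host-side latency (s/frame) and throughput (frames/s) on random frames."""
    dummy = np.random.rand(1, *model.input_shape[1:]).astype(np.float32)
    model.predict(dummy, verbose=0)  # warm-up run
    start = time.perf_counter()
    for _ in range(num_runs):
        model.predict(dummy, verbose=0)
    elapsed = time.perf_counter() - start
    return elapsed / num_runs, num_runs / elapsed


if __name__ == "__main__":
    cnn = generate_model(num_conv_layers=4, base_filters=8)
    latency, throughput = measure_latency(cnn)
    print(f"latency: {latency * 1e3:.1f} ms/frame, throughput: {throughput:.1f} fps")
```

Sweeping the generator's parameters (layer count, filter width, input resolution) yields a family of models whose measured latency and throughput can then be compared across devices, which is the benchmarking pattern the abstract describes.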
