Paper Title

Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use

Paper Authors

Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata

Paper Abstract

Sufficiently perceiving the environment is a critical factor in robot motion generation. Although the introduction of deep visual processing models has contributed to extending this ability, existing methods lack the ability to actively modify what to perceive, which humans do internally during visual cognitive processes. This paper addresses the issue by proposing a novel robot motion generation model inspired by a human cognitive structure. The model incorporates a state-driven active top-down visual attention module, which acquires attentions that can actively change targets based on task states. We term these role-based attentions, since the acquired attention is directed to targets that share a coherent role throughout the motion. The model was trained on a robot tool-use task in which the role-based attentions perceived the robot gripper and the tool as the end-effector during object-picking and object-dragging motions, respectively. This is analogous to a biological phenomenon called tool-body assimilation, in which one regards a handled tool as an extension of one's body. The results suggest improved flexibility in the model's visual perception, which sustained stable attention and motion even when the model was provided with untrained tools or exposed to the experimenter's distractions.
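The abstract describes the core mechanism only at a high level: a top-down visual attention module whose attention target is conditioned on the current task state. The paper text here includes no code, so the following is only a minimal PyTorch-style sketch of that general idea, assuming a spatial-softmax (soft-argmax) readout; the class name StateDrivenAttention, the layer sizes, and the state-to-query mapping are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (assumption, not the paper's implementation): a visual attention
# module whose attention target is driven by the current task state.
import torch
import torch.nn as nn


class StateDrivenAttention(nn.Module):
    """Predicts an image attention point from a task state and returns the
    attended feature vector (spatial-softmax style readout)."""

    def __init__(self, state_dim: int = 64, feat_channels: int = 32):
        super().__init__()
        # Bottom-up convolutional feature extractor.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Top-down query generated from the task state (e.g. an RNN hidden state).
        self.query = nn.Linear(state_dim, feat_channels)

    def forward(self, image: torch.Tensor, state: torch.Tensor):
        feat = self.encoder(image)                        # (B, C, H, W)
        B, C, H, W = feat.shape
        q = self.query(state)                             # (B, C)
        scores = torch.einsum("bchw,bc->bhw", feat, q)    # state-conditioned similarity map
        attn = torch.softmax(scores.view(B, -1), dim=-1).view(B, H, W)
        # Soft-argmax: expected attention coordinate in [0, 1] x [0, 1].
        ys = torch.linspace(0.0, 1.0, H, device=feat.device)
        xs = torch.linspace(0.0, 1.0, W, device=feat.device)
        y = (attn.sum(dim=2) * ys).sum(dim=1)             # (B,)
        x = (attn.sum(dim=1) * xs).sum(dim=1)             # (B,)
        # Attended feature vector passed on to the motion-generation network.
        glimpse = torch.einsum("bchw,bhw->bc", feat, attn)
        return torch.stack([x, y], dim=-1), glimpse
```

In a setup of the kind the abstract describes, the returned glimpse vector and attention coordinate could be fed, together with the robot's joint state, into a recurrent motion generator, so that the predicted task state in turn shifts the attention target (gripper vs. tool) at the next step.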
