Paper Title

Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios

Authors

Francisco Cruz, Charlotte Young, Richard Dazeley, Peter Vamplew

Abstract

Explainable artificial intelligence is a research field that tries to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot's decision-making process. Previous work, however, has largely focused on providing technical explanations that are better understood by AI practitioners than by non-expert end-users. In this work, we make use of human-like explanations built from the probability of success of completing the goal that an autonomous robot exhibits after performing an action. These explanations are intended to be understood by people with little or no experience with artificial intelligence methods. This paper presents a user trial to study whether explanations that focus on the probability an action has of succeeding in its goal constitute a suitable explanation for non-expert end-users. The results obtained show that non-expert participants rate robot explanations focused on the probability of success higher, and with less variance, than technical explanations generated from Q-values, and that they also favor counterfactual explanations over standalone explanations.
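The abstract contrasts three styles of explanation: a technical one derived from Q-values, a human-like one based on the probability of success, and a counterfactual variant that compares the chosen action against an alternative. The sketch below is a minimal, hypothetical illustration of how such explanation strings could be generated; it is not the authors' implementation. The action names, Q-values, success probabilities, and sentence templates are all assumed for demonstration.

```python
import numpy as np

# Toy data (assumed, not from the paper): learned Q-values and
# estimated probabilities of completing the goal for each action.
actions = ["move left", "move right", "pick up object"]
q_values = np.array([0.31, 0.58, 0.74])
p_success = np.array([0.42, 0.61, 0.87])

chosen = int(np.argmax(q_values))            # greedy action
alternative = int(np.argsort(q_values)[-2])  # next-best action, for the counterfactual

# Technical explanation (Q-value based), mainly meaningful to AI practitioners.
technical = (f"I chose '{actions[chosen]}' because its Q-value "
             f"({q_values[chosen]:.2f}) was the highest.")

# Human-like standalone explanation, phrased as a probability of success.
standalone = (f"I chose '{actions[chosen]}' because it gives me a "
              f"{p_success[chosen]:.0%} chance of completing the goal.")

# Human-like counterfactual explanation, contrasting with the next-best action.
counterfactual = (f"I chose '{actions[chosen]}' because it gives me a "
                  f"{p_success[chosen]:.0%} chance of completing the goal, "
                  f"whereas '{actions[alternative]}' would only give me "
                  f"{p_success[alternative]:.0%}.")

print(technical)
print(standalone)
print(counterfactual)
```

Under these assumptions, the user trial described above amounts to asking non-expert participants to rate sentences like `standalone` and `counterfactual` against sentences like `technical`.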
