Paper Title

How to Answer Why -- Evaluating the Explanations of AI Through Mental Model Analysis

Paper Authors

Tim Schrills, Thomas Franke

Abstract

To achieve optimal human-system integration in the context of user-AI interaction, it is important that users develop a valid representation of how an AI works. In most everyday interactions with technical systems, users construct mental models (i.e., abstractions of the mechanisms a system is assumed to use to perform a given task). If no explicit explanations are provided by the system (e.g., by a self-explaining AI) or by other sources (e.g., an instructor), the mental model is typically formed from experience, i.e., the user's observations during the interaction. The congruence between this mental model and the system's actual functioning is vital, as the model is used for assumptions, predictions, and consequently for decisions regarding system use. A key question for human-centered AI research is therefore how to validly survey users' mental models. The objective of the present research is to identify suitable elicitation methods for mental model analysis. We evaluated whether mental model analysis is suitable as an empirical research method. In addition, methods from cognitive tutoring are integrated. We propose an exemplary method for evaluating explainable AI approaches in a human-centered way.
