Paper Title

From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent

Authors

Nguyen, Van Bach, Schlötterer, Jörg, Seifert, Christin

Abstract

The goal of Explainable AI (XAI) is to design methods to provide insights into the reasoning process of black-box models, such as deep neural networks, in order to explain them to humans. Social science research states that such explanations should be conversational, similar to human-to-human explanations. In this work, we show how to incorporate XAI in a conversational agent, using a standard design for the agent comprising natural language understanding and generation components. We build upon an XAI question bank, which we extend by quality-controlled paraphrases, to understand the user's information needs. We further systematically survey the literature for suitable explanation methods that provide the information to answer those questions, and present a comprehensive list of suggestions. Our work is the first step towards truly natural conversations about machine learning models with an explanation agent. The comprehensive list of XAI questions and the corresponding explanation methods may support other researchers in providing the necessary information to address users' demands. To facilitate future work, we release our source code and data.
