Paper Title
Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
Paper Authors
Paper Abstract
Explainable Artificial Intelligence (XAI) has recently gained a swell of interest, as many Artificial Intelligence (AI) practitioners and developers are compelled to justify how their AI-based systems work. For decades, most XAI systems were built as knowledge-based or expert systems. These systems reasoned over technical descriptions of explanations, with little regard for the user's cognitive capabilities. The emphasis of XAI research now appears to be shifting toward a more pragmatic approach to explanation, aimed at better understanding. One area where cognitive science research can substantially influence XAI advancements is the evaluation of user knowledge and feedback, both of which are essential for XAI system evaluation. To this end, we propose a framework for generating and evaluating explanations at different cognitive levels of understanding. We adopt Bloom's taxonomy, a widely accepted model for assessing a user's cognitive capability. We use counterfactual explanations as the explanation medium, paired with user feedback, to validate how well the explanation is understood at each cognitive level and to improve the explanation generation methods accordingly.
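The abstract describes the framework only at a high level; the Python sketch below is one illustrative reading of it, not the paper's implementation. It pairs a naive greedy counterfactual search over a scikit-learn classifier with feedback probes tagged by Bloom's taxonomy levels. The classifier, the search strategy, and the question templates are all assumptions introduced for this example.

```python
# Illustrative sketch only: a naive counterfactual search plus feedback
# prompts tagged by Bloom's taxonomy levels. The classifier, the greedy
# search, and the probe questions are assumptions for this example, not
# the paper's actual framework.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Bloom's taxonomy levels with hypothetical probe questions a study
# could ask to gauge a user's understanding of an explanation.
BLOOM_PROBES = {
    "Remember":   "Which feature was changed in the counterfactual?",
    "Understand": "Why does that change flip the model's decision?",
    "Apply":      "What change would you try for a similar case?",
    "Analyze":    "Which features matter most to this decision?",
    "Evaluate":   "Is the suggested change actionable and fair?",
    "Create":     "Can you propose an alternative counterfactual?",
}

def find_counterfactual(model, x, target, step=0.1, max_iter=200):
    """Greedy finite-difference search: nudge one feature at a time
    toward higher target-class probability until the label flips."""
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf
        base = model.predict_proba(cf.reshape(1, -1))[0, target]
        gains = np.zeros_like(cf)
        for i in range(cf.size):
            probe = cf.copy()
            probe[i] += step
            gains[i] = model.predict_proba(probe.reshape(1, -1))[0, target] - base
        cf[int(np.argmax(gains))] += step
    return None  # no counterfactual found within the budget

# Toy usage: train on synthetic data, explain one negative instance,
# then surface the feedback probes for each cognitive level.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = X[y == 0][0]
cf = find_counterfactual(model, x, target=1)
if cf is not None:
    print("original:", np.round(x, 2), "-> counterfactual:", np.round(cf, 2))
for level, question in BLOOM_PROBES.items():
    print(f"[{level}] {question}")
```

In the proposed framework, the user's answers to such level-specific probes would inform how subsequent explanations are generated; here they are simply printed to show the shape of the intended feedback loop.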