Title

Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach

Authors

Upol Ehsan, Mark O. Riedl

Abstract

Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users that shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of "who" the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm--mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI, such as value-sensitive design and participatory design--not only helps us understand our intellectual blind spots, but it can also open up new design and research spaces.
