Paper Title

Evidence-based explanation to promote fairness in AI systems

Paper Authors

Ferreira, Juliana Jansen; Monteiro, Mateus de Souza

Paper Abstract

As Artificial Intelligence (AI) technology becomes intertwined with more and more systems, people are using AI to make decisions in their everyday activities. In simple contexts, such as Netflix recommendations, or in more complex ones, such as judicial scenarios, AI is part of people's decisions. People make decisions and, usually, they need to explain those decisions to others or account for them in some matter. This is particularly critical in contexts where human expertise is central to decision-making. To explain decisions made with AI support, people need to understand how the AI contributed to those decisions. When fairness is considered, the role that AI plays in a decision-making process becomes even more sensitive, since it affects the fairness and the responsibility of the people making the ultimate decision. We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'. In this position paper, we discuss our approach for AI systems using fairness-sensitive cases from the literature.
