Paper Title
Towards Involving End-users in Interactive Human-in-the-loop AI Fairness
Paper Authors
Paper Abstract
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications. Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer. Drawing inspiration from an Explainable AI (XAI) approach called \emph{explanatory debugging} used in interactive machine learning, our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users without any technical or domain background to identify potential fairness issues and possibly fix them in the context of loan decisions. Through workshops with end-users, we co-designed and implemented a prototype system that allowed end-users to see why predictions were made, and then to change weights on features to "debug" fairness issues. We evaluated the use of this prototype system through an online study. To investigate the implications of diverse human values about fairness around the globe, we also explored how cultural dimensions might play a role in using this prototype. Our results contribute to the design of interfaces to allow end-users to be involved in judging and addressing AI fairness through a human-in-the-loop approach.
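The interaction the abstract describes (show end-users why a loan prediction was made, then let them change feature weights to "debug" a perceived fairness issue) can be sketched in code. The snippet below is a minimal, hypothetical illustration, not the authors' prototype: the `LoanModel` class, its feature names, weights, and threshold are all invented for this example, and a simple linear scoring model stands in for whatever ML model the system actually uses.

```python
# Hypothetical sketch of an "explanatory debugging"-style loop for loan decisions:
# 1) explain a prediction via per-feature contributions, 2) let an end-user
# override the weight of a feature they judge unfair, 3) re-run the decision.

from dataclasses import dataclass


@dataclass
class LoanModel:
    """Toy linear scoring model; feature names and weights are illustrative only."""
    weights: dict  # feature name -> weight

    def explain(self, applicant: dict) -> dict:
        # Per-feature contribution = weight * feature value (the "why" shown to users).
        return {f: self.weights[f] * applicant[f] for f in self.weights}

    def predict(self, applicant: dict, threshold: float = 0.5) -> bool:
        # Approve the loan if the summed contributions reach the threshold.
        return sum(self.explain(applicant).values()) >= threshold

    def adjust_weight(self, feature: str, new_weight: float) -> None:
        # End-user "debugging": directly overriding a feature's weight,
        # e.g. one they consider a proxy for a protected attribute.
        self.weights[feature] = new_weight


model = LoanModel(weights={"income": 0.6, "credit_history": 0.3, "zip_code": 0.4})
applicant = {"income": 0.3, "credit_history": 0.5, "zip_code": 0.9}

print(model.explain(applicant))       # contributions; zip_code dominates here
print(model.predict(applicant))       # True: initial decision is "approve"
model.adjust_weight("zip_code", 0.0)  # user removes a feature they deem unfair
print(model.predict(applicant))       # False: decision flips after the user's "fix"
```

In this toy run the user's weight change flips the decision, which is the kind of immediate, interpretable feedback a human-in-the-loop fairness interface would aim to surface.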