Title
Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers
Authors
Abstract
Recent AI algorithms are black-box models whose decisions are difficult to interpret. eXplainable AI (XAI) is a class of methods that seeks to address the lack of AI interpretability and trust by explaining AI decisions to customers. The common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. Our paper challenges this notion through a game-theoretic model of a policy-maker who maximizes social welfare, firms in a duopoly competition that maximize profits, and heterogeneous consumers. The results show that XAI regulation may be redundant. In fact, mandating fully transparent XAI may make both firms and consumers worse off. This reveals a tradeoff between maximizing welfare and receiving explainable AI outputs. We extend the existing literature on both methodological and substantive fronts, and we introduce and study the notion of XAI fairness, which may be impossible to guarantee even under mandatory XAI. Finally, the regulatory and managerial implications of our results for policy-makers and businesses are discussed, respectively.
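The abstract leaves the model's functional forms to the body of the paper. Purely as an illustrative sketch, in notation of our own choosing rather than the authors' actual specification, the policy-maker's problem in such a setting can be framed as choosing a mandated transparency level $e$ that trades off consumer surplus against duopoly profits:

\[
\max_{e \in [0,1]} \; W(e) = \mathrm{CS}\bigl(p_1^*(e),\, p_2^*(e),\, e\bigr) + \pi_1^*(e) + \pi_2^*(e),
\]

where $p_i^*(e)$ and $\pi_i^*(e)$ denote firm $i$'s equilibrium price and profit under mandate $e$. In this notation, the abstract's claims correspond to the possibilities that $\arg\max_e W(e) < 1$ (full transparency is not welfare-optimal) and that the unregulated equilibrium already attains the optimum (regulation is redundant).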