Paper Title
AI loyalty: A New Paradigm for Aligning Stakeholder Interests
Paper Authors
Paper Abstract
When we consult with a doctor, lawyer, or financial advisor, we generally assume that they are acting in our best interests. But what should we assume when it is an artificial intelligence (AI) system that is acting on our behalf? Early examples of AI assistants like Alexa, Siri, Google, and Cortana already serve as a key interface between consumers and information on the web, and users routinely rely upon AI-driven systems like these to take automated actions or provide information. Superficially, such systems may appear to be acting according to user interests. However, many AI systems are designed with embedded conflicts of interest, acting in ways that subtly benefit their creators (or funders) at the expense of users. To address this problem, in this paper we introduce the concept of AI loyalty. AI systems are loyal to the degree that they are designed to minimize, and make transparent, conflicts of interest, and to act in ways that prioritize the interests of users. Properly designed, such systems could have considerable functional and competitive advantages, not to mention ethical ones, relative to systems that lack these properties. Loyal AI products hold an obvious appeal for the end-user and could serve to promote the alignment of the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is sufficiently transparent about conflicts of interest and acts in a manner that is loyal to the user, and argue that AI loyalty should be considered during the technological design process alongside other important values in AI ethics such as fairness, accountability, privacy, and equity. We discuss a range of mechanisms, from pure market forces to strong regulatory frameworks, that could support incorporation of AI loyalty into a variety of future AI systems.