Paper Title
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Paper Authors
Paper Abstract
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose (spanning institutions, software, and hardware) and make recommendations aimed at implementing, exploring, or improving those mechanisms.