Title
Artificial Concepts of Artificial Intelligence: Institutional Compliance and Resistance in AI Startups
Authors
Abstract
Scholars and industry practitioners have debated how best to develop interventions for ethical artificial intelligence (AI). Such interventions recommend that companies building and using AI tools change their technical practices, but fail to grapple with critical questions about the organizational and institutional context in which AI is developed. In this paper, we contribute descriptive research on the life of "AI" as a discursive concept and organizational practice in an understudied sphere--emerging AI startups--with a focus on the extra-organizational pressures faced by entrepreneurs. Leveraging a theoretical lens for how organizations change, we conducted semi-structured interviews with 23 entrepreneurs working at early-stage AI startups. We find that actors within startups both conform to and resist institutional pressures. Our analysis identifies a central tension for AI entrepreneurs: they often valued scientific integrity and methodological rigor; however, influential external stakeholders either lacked the technical knowledge to appreciate entrepreneurs' emphasis on rigor or were more focused on business priorities. As a result, entrepreneurs adopted hyped marketing messages about AI that diverged from their scientific values, while attempting to preserve their legitimacy internally. Institutional pressures and organizational constraints also influenced entrepreneurs' modeling practices and their responses to actual or impending regulation. We conclude with a discussion of how such pressures could be used as leverage for effective interventions towards building ethical AI.