Paper Title
Artificial Intelligence is stupid and causal reasoning won't fix it
Paper Authors
Paper Abstract
Artificial Neural Networks have reached Grandmaster and even super-human performance across a variety of games: from those involving perfect-information (such as Go) to those involving imperfect-information (such as Starcraft). Such technological developments from AI-labs have ushered concomitant applications across the world of business - where an AI brand tag is fast becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong - an autonomous vehicle crashes; a chatbot exhibits racist behaviour; automated credit scoring processes discriminate on gender etc. - there are often significant financial, legal and brand consequences and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that, 'all the impressive achievements of deep learning amount to just curve fitting'. The key, Judea Pearl suggests, is to replace reasoning by association with causal-reasoning - the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: 'we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets - often using an approach known as Deep Learning - and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality'. In this paper, foregrounding what in 1949 Gilbert Ryle termed a category mistake, I will offer an alternative explanation for AI errors: it is not so much that AI machinery cannot grasp causality, but that AI machinery - qua computation - cannot understand anything at all.