Paper Title
Computable Artificial General Intelligence
Paper Authors
Paper Abstract
Artificial general intelligence (AGI) may herald our extinction, according to AI safety research. Yet claims regarding AGI must rely upon mathematical formalisms -- theoretical agents we may analyse or attempt to build. AIXI appears to be the only such formalism supported by proof that its behaviour is optimal, a consequence of its use of compression as a proxy for intelligence. Unfortunately, AIXI is incomputable, and claims regarding its behaviour are highly subjective. We argue that this is because AIXI formalises cognition as taking place in isolation from the environment in which goals are pursued (Cartesian dualism). We propose an alternative, supported by proof and experiment, which overcomes these problems. Integrating research from cognitive science with AI, we formalise an enactive model of learning and reasoning to address the problem of subjectivity. This allows us to formulate a different proxy for intelligence, called weakness, which addresses the problem of incomputability. We prove optimal behaviour is attained when weakness is maximised. This proof is supplemented by experimental results comparing weakness and description length (the closest analogue to compression possible without reintroducing subjectivity). Weakness outperforms description length, suggesting it is a better proxy. Furthermore, we show that, if cognition is enactive, then minimisation of description length is neither necessary nor sufficient to attain optimal performance, undermining the notion that compression is closely related to intelligence. However, there remain open questions regarding the implementation of scalable AGI. In the short term, these results may be best utilised to improve the performance of existing systems. For example, our results explain why DeepMind's Apperception Engine is able to generalise effectively, and how to replicate that performance by maximising weakness.
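To make the contrast between the two proxies concrete, the following is a minimal toy sketch (not from the paper) in Python. It assumes a hypothesis can be modelled by its extension, the set of input-output pairs it permits; weakness is then the cardinality of that extension, while description length is crudely approximated by the length of a string encoding. All identifiers (`weakness`, `description_length`, `consistent`, and so on) are illustrative, not from the paper.

```python
from itertools import product

# A tiny universe of conceivable input-output pairs.
INPUTS = [0, 1, 2, 3]
OUTPUTS = [0, 1]
UNIVERSE = set(product(INPUTS, OUTPUTS))  # 8 pairs

# Observed data that any viable hypothesis must account for.
DATA = {(0, 0), (1, 1)}

def consistent(extension):
    """A hypothesis is viable only if its extension covers the data."""
    return DATA <= extension

def weakness(extension):
    """Weakness proxy: the cardinality of the hypothesis's extension."""
    return len(extension)

def description_length(extension):
    """Crude stand-in for description length: size of a string encoding."""
    return len(str(sorted(extension)))

def all_subsets(universe):
    """Enumerate every subset of a small universe (2^8 = 256 here)."""
    items = sorted(universe)
    for mask in range(1 << len(items)):
        yield {items[i] for i in range(len(items)) if (mask >> i) & 1}

# Candidate hypotheses: every extension consistent with the data.
candidates = [ext for ext in all_subsets(UNIVERSE) if consistent(ext)]

weakest = max(candidates, key=weakness)             # maximise weakness
shortest = min(candidates, key=description_length)  # minimise description length

print("weakest extension size: ", weakness(weakest))   # 8: the whole universe
print("shortest extension size:", weakness(shortest))  # 2: just the observed data
```

Under these toy assumptions, the weakest consistent hypothesis is the one that generalises as far as possible beyond the observed data, whereas the shortest simply memorises it -- a small-scale mirror of the abstract's claim that weakness is the better proxy for intelligence.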