Paper Title

A Theoretical Framework for Inference and Learning in Predictive Coding Networks

Paper Authors

Beren Millidge, Yuhang Song, Tommaso Salvatori, Thomas Lukasiewicz, Rafal Bogacz

Paper Abstract

Predictive coding (PC) is an influential theory in computational neuroscience, which argues that the cortex forms unsupervised world models by implementing a hierarchical process of prediction error minimization. PC networks (PCNs) are trained in two phases. First, neural activities are updated to optimize the network's response to external stimuli. Second, synaptic weights are updated to consolidate this change in activity -- an algorithm called "prospective configuration". While previous work has shown how, in various limits, PCNs can be made to approximate backpropagation (BP), recent work has demonstrated that PCNs operating in this standard regime, which does not approximate BP, nevertheless obtain training and generalization performance competitive with BP-trained networks, while outperforming them on tasks such as online, few-shot, and continual learning, where brains are known to excel. Despite this promising empirical performance, little is understood theoretically about the properties and dynamics of PCNs in this regime. In this paper, we provide a comprehensive theoretical analysis of the properties of PCNs trained with prospective configuration. We first derive analytical results concerning the inference equilibrium for PCNs and a previously unknown close connection to target propagation (TP). Second, we provide a theoretical analysis of learning in PCNs as a variant of generalized expectation-maximization, and use it to prove the convergence of PCNs to critical points of the BP loss function, thus showing that, in theory, deep PCNs can achieve the same generalization performance as BP while maintaining their unique advantages.
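
As a rough illustration of the two-phase procedure described in the abstract, the NumPy sketch below trains a toy PCN: hidden-layer activities are first relaxed to reduce the squared prediction errors while input and output are clamped (inference), and the weights are then updated against the remaining errors (learning, i.e. prospective configuration). The layer sizes, tanh nonlinearity, learning rates, and number of inference steps are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Toy PCN with energy F = sum_l ||x_{l+1} - W_l f(x_l)||^2 / 2.
# All hyperparameters below are assumptions for illustration only.
rng = np.random.default_rng(0)
sizes = [4, 8, 3]  # input, hidden, output widths (assumed)
W = [rng.normal(0.0, 0.1, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

def f(x):
    return np.tanh(x)

def df(x):
    return 1.0 - np.tanh(x) ** 2

def train_step(x_in, y_target, n_inference=50, lr_x=0.1, lr_w=0.01):
    # Initialize activities with a feedforward pass; clamp input and output.
    x = [x_in]
    for Wl in W:
        x.append(Wl @ f(x[-1]))
    x[-1] = y_target

    # Phase 1 (inference): relax free activities by gradient descent on F,
    # with errors eps_l = x_{l+1} - W_l f(x_l).
    for _ in range(n_inference):
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
        for l in range(1, len(x) - 1):
            # -dF/dx_l: own prediction error minus error fed back from above.
            dx = -eps[l - 1] + df(x[l]) * (W[l].T @ eps[l])
            x[l] += lr_x * dx

    # Phase 2 (learning): update weights to consolidate the relaxed activities,
    # i.e. descend F in W given the equilibrated errors.
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
    for l in range(len(W)):
        W[l] += lr_w * np.outer(eps[l], f(x[l]))

# Example: one supervised training step on a random input/target pair.
train_step(rng.normal(size=sizes[0]), rng.normal(size=sizes[-1]))
```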
