Paper Title

Iterative VAE as a predictive brain model for out-of-distribution generalization

Paper Authors

Victor Boutin, Aimen Zerroug, Minju Jung, Thomas Serre

Paper Abstract

Our ability to generalize beyond training data to novel, out-of-distribution, image degradations is a hallmark of primate vision. The predictive brain, exemplified by predictive coding networks (PCNs), has become a prominent neuroscience theory of neural computation. Motivated by the recent successes of variational autoencoders (VAEs) in machine learning, we rigorously derive a correspondence between PCNs and VAEs. This motivates us to consider iterative extensions of VAEs (iVAEs) as plausible variational extensions of the PCNs. We further demonstrate that iVAEs generalize to distributional shifts significantly better than both PCNs and VAEs. In addition, we propose a novel measure of recognizability for individual samples which can be tested against human psychophysical data. Overall, we hope this work will spur interest in iVAEs as a promising new direction for modeling in neuroscience.
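
The abstract does not spell out how the iterative VAE (iVAE) performs inference, so the following is only a minimal sketch of one common way to make VAE inference iterative: start from the amortized encoder's posterior estimate and refine it with a few gradient steps on the ELBO for each input. The PyTorch framing, the dimensions, and the names `neg_elbo` and `iterative_inference` are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch (assumption, not the paper's iVAE): iterative refinement of a
# VAE's approximate posterior by gradient descent on the negative ELBO.
import torch
import torch.nn as nn

input_dim, latent_dim = 784, 16  # illustrative sizes

encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                        nn.Linear(128, 2 * latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, input_dim))

def neg_elbo(x, mu, logvar):
    """Negative ELBO with a standard-normal prior and Gaussian likelihood."""
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)              # reparameterization trick
    recon_err = ((decoder(z) - x) ** 2).sum(dim=-1)   # Gaussian log-lik. up to a constant
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1)
    return (recon_err + kl).mean()

def iterative_inference(x, n_steps=10, lr=0.1):
    """Initialize from the amortized encoder, then refine the posterior parameters."""
    mu, logvar = encoder(x).chunk(2, dim=-1)
    mu = mu.detach().requires_grad_()
    logvar = logvar.detach().requires_grad_()
    opt = torch.optim.SGD([mu, logvar], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        neg_elbo(x, mu, logvar).backward()
        opt.step()
    return mu.detach(), logvar.detach()

x = torch.randn(8, input_dim)                         # dummy batch of inputs
mu, logvar = iterative_inference(x)
```

Refining the posterior per input, rather than relying solely on a single amortized encoder pass, is one plausible reason such a model could degrade more gracefully under distribution shift, though the paper's specific mechanism and its relation to predictive coding may differ from this sketch.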
