Paper title
Data-driven emergence of convolutional structure in neural networks
Paper authors
Paper abstract
Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus crucial in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry and their capabilities triggered the first wave of deep learning successes. However, learning convolutions directly from translation-invariant data with a fully-connected network has so far proven elusive. Here, we show how initially fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localised, space-tiling receptive fields. These receptive fields match the filters of a convolutional network trained on the same task. By carefully designing data models for the visual scene, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs, which has long been recognised as the hallmark of natural images. We provide an analytical and numerical characterisation of the pattern-formation mechanism responsible for this phenomenon in a simple model and find an unexpected link between receptive field formation and tensor decomposition of higher-order input correlations. These results provide a new perspective on the development of low-level feature detectors in various sensory modalities, and pave the way for studying the impact of higher-order statistics on learning in neural networks.
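The abstract links receptive field formation to the tensor decomposition of higher-order input correlations. As a rough, hypothetical illustration of that idea (not the paper's actual data model or method), the sketch below builds synthetic inputs with localised, non-Gaussian structure, forms the empirical third-order moment tensor, and runs tensor power iteration to extract a leading rank-1 component; all parameter choices (dimension, bump shape, amplitude distribution) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data model (illustrative only): each sample is a localised
# 3-pixel "bump" at a random position with a skewed, heavy-tailed amplitude,
# giving translation-invariant but non-Gaussian input statistics.
d, n = 16, 5000                                  # input dimension, sample count
X = np.zeros((n, d))
centres = rng.integers(0, d, size=n)
amps = rng.exponential(size=n)                   # skewed amplitudes (non-Gaussian)
for i in range(n):
    idx = (centres[i] + np.arange(-1, 2)) % d    # bump with circular wrap-around
    X[i, idx] = amps[i] * np.array([0.5, 1.0, 0.5])
X -= X.mean(axis=0)

# Empirical third-order moment tensor T_{abc} = E[x_a x_b x_c].
T = np.einsum('ia,ib,ic->abc', X, X, X) / n

# Tensor power iteration: v <- T(I, v, v) / ||T(I, v, v)||, the third-order
# analogue of matrix power iteration, used to find a robust rank-1 component.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
for _ in range(200):
    v = np.einsum('abc,b,c->a', T, v, v)
    v /= np.linalg.norm(v)

# Unlike the second-order (covariance) case, where translation invariance
# forces delocalised Fourier modes, the recovered component here tends to be
# localised, qualitatively echoing the receptive fields in the abstract.
print(np.round(v, 2))
```

The contrast with the covariance matrix is the point of the sketch: the second-order statistics of these inputs are (circulant) translation-invariant and so carry no preference for localised filters, whereas the third-order tensor does.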