Paper Title
Learning compact generalizable neural representations supporting perceptual grouping
Paper Authors
Paper Abstract
Work at the intersection of vision science and deep learning has begun to explore the efficacy of deep convolutional networks (DCNs) and recurrent networks in solving the perceptual grouping problems that underlie primate visual recognition and segmentation. Here, we extend this line of work to investigate the compactness and generalizability of DCN solutions for learning low-level perceptual grouping routines involving contour integration. We introduce V1Net, a bio-inspired recurrent unit that incorporates the lateral connections ubiquitous in cortical circuitry. Feedforward convolutional layers in DCNs can be substituted with V1Net modules to enhance their support for contextual visual processing and perceptual grouping. We compare the learning efficiency and accuracy of V1Net-DCNs to those of 14 carefully selected feedforward and recurrent neural architectures (including state-of-the-art DCNs) on MarkedLong -- a synthetic forced-choice contour integration dataset of 800,000 images that we introduce here -- and on the previously published Pathfinder contour integration benchmarks. We gauge solution generalizability by measuring the transfer learning performance of candidate models trained on MarkedLong and then fine-tuned on Pathfinder. Our results demonstrate that a compact 3-layer V1Net-DCN matches or outperforms the test accuracy and sample efficiency of all tested comparison models, which contain between 5x and 1000x more trainable parameters; we also note that V1Net-DCN learns the most compact generalizable solution to MarkedLong. A visualization of the temporal dynamics of V1Net-DCN elucidates its use of interpretable grouping computations to solve MarkedLong. The compact and rich representations of V1Net-DCN also make it a promising candidate for building on-device machine vision algorithms and for better understanding biological cortical circuitry.
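The abstract describes V1Net only at a high level: a recurrent unit with lateral connections that can stand in for a feedforward convolutional layer. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation; the ConvLSTM-style gating, the depthwise large-kernel lateral convolution, and the `timesteps` parameter are all assumptions chosen to make the substitution concrete in PyTorch.

```python
# Hypothetical sketch of a V1Net-style recurrent unit. The paper's actual
# V1Net architecture is not specified in the abstract, so the gating scheme,
# lateral-kernel size, and timestep count below are illustrative assumptions.
import torch
import torch.nn as nn


class V1NetSketch(nn.Module):
    """Recurrent convolutional unit with lateral (horizontal) connections.

    Intended as a drop-in replacement for a feedforward conv layer: the
    feedforward input drives a ConvLSTM-like cell whose hidden state is
    additionally mixed through a wide lateral convolution at each timestep.
    """

    def __init__(self, in_channels: int, hidden_channels: int,
                 lateral_kernel: int = 7, timesteps: int = 4):
        super().__init__()
        self.timesteps = timesteps
        self.hidden_channels = hidden_channels
        # Feedforward drive plus recurrent gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size=3, padding=1)
        # Lateral connections: a large-kernel depthwise conv mixing hidden
        # activity across space, loosely analogous to the long-range
        # horizontal connections in cortical circuitry.
        self.lateral = nn.Conv2d(hidden_channels, hidden_channels,
                                 kernel_size=lateral_kernel,
                                 padding=lateral_kernel // 2,
                                 groups=hidden_channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, height, width = x.shape
        h = x.new_zeros(b, self.hidden_channels, height, width)
        c = torch.zeros_like(h)
        for _ in range(self.timesteps):
            # Inject lateral context from the previous hidden state.
            h_lat = h + self.lateral(h)
            i, f, o, g = self.gates(torch.cat([x, h_lat], dim=1)).chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
        return h


# Usage: substitute for a feedforward conv layer in a small DCN.
if __name__ == "__main__":
    unit = V1NetSketch(in_channels=3, hidden_channels=32)
    out = unit(torch.randn(2, 3, 64, 64))
    print(out.shape)  # torch.Size([2, 32, 64, 64])
```

Because the unit preserves spatial resolution and only changes the channel count, stacking a few such units in place of conv layers yields the kind of compact 3-layer recurrent DCN the abstract evaluates against larger feedforward baselines.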