Paper Title
Local dendritic balance enables learning of efficient representations in networks of spiking neurons
Paper Authors
Paper Abstract
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs, and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
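The abstract contrasts classical pairwise Hebbian-like plasticity with a voltage-dependent scheme in which learned inhibition balances excitatory input within individual dendritic compartments and thereby gates excitatory plasticity. The sketch below is a minimal, illustrative comparison of these two update types under simple rate-like activities; the specific update forms, the residual-voltage gating, and the learning rates `eta_E`/`eta_I` are assumptions for illustration, not the paper's actual derivation.

```python
import numpy as np

# Illustrative sketch only: contrasts a pairwise Hebbian-like update with a
# voltage-dependent update in which inhibition learns to balance excitation
# per (model) dendritic compartment. Variable names and update forms are
# assumptions, not the paper's exact learning rules.

rng = np.random.default_rng(0)

n_in, n_out = 20, 5          # input and output population sizes
eta_E, eta_I = 1e-3, 1e-2    # assumed excitatory / inhibitory learning rates

W_E = rng.normal(0.0, 0.1, size=(n_out, n_in))  # excitatory input weights
W_I = np.zeros((n_out, n_in))                   # learned inhibitory (balancing) weights


def hebbian_step(x, z):
    """Classical pairwise Hebbian-like update: Delta w ~ presynaptic * postsynaptic."""
    global W_E
    W_E += eta_E * np.outer(z, x)


def dendritic_balance_step(x, z):
    """Voltage-dependent sketch: inhibition tracks the excitatory drive in each
    compartment (E/I balance), and the residual local voltage gates the
    excitatory weight update."""
    global W_E, W_I
    u = W_E * x - W_I * x          # residual local voltage per compartment (assumption)
    W_I += eta_I * u               # inhibition learns to cancel predictable excitation
    W_E += eta_E * z[:, None] * u  # excitatory plasticity modulated by the residual


# Example usage with random spike-count-like activities
x = rng.poisson(1.0, size=n_in).astype(float)   # presynaptic activity
z = rng.poisson(0.5, size=n_out).astype(float)  # postsynaptic activity
hebbian_step(x, z)
dendritic_balance_step(x, z)
```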