Paper Title

Event-Based Backpropagation can compute Exact Gradients for Spiking Neural Networks

Paper Authors

Timo C. Wunderlich, Christian Pehle

Paper Abstract

Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks using the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivative jumps, allowing for backpropagation through discrete spike events without approximations. This algorithm, EventProp, backpropagates errors at spike times in order to compute the exact gradient in an event-based, temporally and spatially sparse fashion. We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike-time or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based learning algorithms in spiking neural networks and provides insights toward their implementation in novel brain-inspired hardware.
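
The abstract describes backpropagating exact gradients through discrete spike events. As a self-contained illustration of the central ingredient, differentiating a spike time through the implicit threshold-crossing condition, the sketch below handles a single leaky integrate-and-fire neuron that receives one input spike. It is a minimal toy constructed for this page, not the paper's EventProp algorithm itself: the closed-form LIF solution, the parameter values, and all function names are assumptions of the demo. EventProp extends the same 1/(dV/dt) factor appearing here to adjoint variables that are propagated backwards in time through an entire network.

```python
# Toy demo (not the paper's full EventProp algorithm): exact differentiation
# through a discrete spike event for one LIF neuron with one input spike.
# All parameter values are arbitrary choices made for this illustration.
import numpy as np

TAU_MEM, TAU_SYN = 10.0, 5.0  # membrane / synaptic time constants (ms); must differ
THETA = 1.0                   # spike threshold
W = 5.0                       # input weight; with these time constants a spike
                              # occurs only if W > 4 * THETA

def voltage(t, w):
    # Closed-form solution of tau_mem dV/dt = -V + I, tau_syn dI/dt = -I,
    # with V(0) = 0 and I(0) = w (one input spike of weight w at t = 0).
    a = w * TAU_SYN / (TAU_MEM - TAU_SYN)
    return a * (np.exp(-t / TAU_MEM) - np.exp(-t / TAU_SYN))

def dv_dt(t, w):
    # Time derivative of the membrane potential.
    a = w * TAU_SYN / (TAU_MEM - TAU_SYN)
    return a * (np.exp(-t / TAU_SYN) / TAU_SYN - np.exp(-t / TAU_MEM) / TAU_MEM)

def spike_time(w):
    # First threshold crossing, found by bisection on the rising flank:
    # the crossing lies between t = 0 and the voltage peak.
    t_peak = np.log(TAU_MEM / TAU_SYN) * TAU_MEM * TAU_SYN / (TAU_MEM - TAU_SYN)
    assert voltage(t_peak, w) > THETA, "no spike for this weight"
    t_lo, t_hi = 0.0, t_peak
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        if voltage(t_mid, w) < THETA:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

T = spike_time(W)

# Exact gradient from the implicit threshold condition V(T(w), w) = THETA:
#   dT/dw = -(dV/dw) / (dV/dt) at the crossing.
# V is linear in w, so dV/dw = V(T) / w = THETA / w. The 1 / (dV/dt) factor
# is the kind of term that enters the adjoint-variable jumps at spike times.
grad_exact = -(THETA / W) / dv_dt(T, W)

# Sanity check against a central finite difference.
eps = 1e-5
grad_fd = (spike_time(W + eps) - spike_time(W - eps)) / (2 * eps)
print(f"spike time T = {T:.6f} ms")
print(f"exact dT/dw  = {grad_exact:.8f}")
print(f"finite diff. = {grad_fd:.8f}")
```

Running the sketch, the implicitly differentiated gradient and the finite-difference estimate agree to several decimal places without any surrogate derivative or smoothing of the spike, which is the sense in which such gradients are exact.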
