Paper Title

Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays

Authors

Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang

Abstract

We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes. These spikes emitted by memristive dynamics are analog in nature, and thus fully differentiable, which eliminates the need for surrogate gradient methods that are prevalent in the spiking neural network (SNN) literature. Memristive neural networks typically either integrate memristors as synapses that map offline-trained networks, or otherwise rely on associative learning mechanisms to train networks of memristive neurons. We instead apply the backpropagation through time (BPTT) training algorithm directly on analog SPICE models of memristive neurons and synapses. Our implementation is fully memristive, in that synaptic weights and spiking neurons are both integrated on resistive RAM (RRAM) arrays without the need for additional circuits to implement spiking dynamics, e.g., analog-to-digital converters (ADCs) or thresholded comparators. As a result, higher-order electrophysical effects are fully exploited to use the state-driven dynamics of memristive neurons at run time. By moving towards non-approximate gradient-based learning, we obtain highly competitive accuracy amongst previously reported lightweight dense fully MSNNs on several benchmarks.
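The key idea in the abstract is that analog spikes are smooth functions of the neuron's internal state, so BPTT can differentiate through them exactly, with no surrogate gradient. A minimal pure-Python sketch of that idea is below. It is not the authors' SPICE model: the leaky state update, the sigmoid "analog spike", and all constants (`a`, `k`, `theta`) are illustrative assumptions standing in for the memristive device dynamics. The hand-derived BPTT gradient is checked against a finite-difference estimate.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, xs, a=0.9, k=10.0, theta=1.0):
    """Unroll a toy leaky neuron whose spike output is a smooth,
    fully differentiable function of the membrane-like state
    (a stand-in for analog memristive spike dynamics)."""
    u, spikes = 0.0, []
    for x in xs:
        u = a * u + w * x                        # state update
        spikes.append(sigmoid(k * (u - theta)))  # analog "spike"
    return spikes

def loss_and_grad(w, xs, target, a=0.9, k=10.0, theta=1.0):
    """Exact dL/dw via backpropagation through time, done by hand.
    No surrogate gradient is needed because the spike is smooth."""
    spikes = forward(w, xs, a, k, theta)
    err = sum(spikes) - target
    loss = err * err
    # dL/ds_t = 2*err; ds_t/du_t = k*s_t*(1 - s_t)
    # u_t = a*u_{t-1} + w*x_t  =>  du_t/dw = a*du_{t-1}/dw + x_t
    grad, g = 0.0, 0.0
    for x, s in zip(xs, spikes):
        g = a * g + x                            # du_t/dw, accumulated
        grad += 2.0 * err * k * s * (1.0 - s) * g
    return loss, grad
```

A quick check that the analytic gradient matches a central finite difference confirms the unrolled dynamics are differentiable end to end, which is the property MEMprop exploits at the device level:

```python
xs, w, eps = [0.5, 1.0, 0.8, 0.2], 1.2, 1e-6
_, g = loss_and_grad(w, xs, target=2.0)
lp, _ = loss_and_grad(w + eps, xs, target=2.0)
lm, _ = loss_and_grad(w - eps, xs, target=2.0)
print(abs(g - (lp - lm) / (2 * eps)))  # ~0, gradients agree
```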
