Title
Spiking neural networks for nonlinear regression
Authors
Abstract
Spiking neural networks, also often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption over traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To open the pathway toward engineering applications, we introduce this exciting technology in the context of continuum mechanics. However, the nature of spiking neural networks poses a challenge for regression problems, which frequently arise in modeling tasks in the engineering sciences. To overcome this problem, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. As the aim of this contribution is a concise introduction to this new methodology, several different spiking neural architectures, ranging from simple spiking feed-forward to complex spiking long short-term memory neural networks, are derived. Several numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. A direct comparison with traditional neural network counterparts shows that the proposed framework is much more efficient while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.
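The abstract's key idea, decoding a binary spike train into a real number via a neuron's membrane potential, can be illustrated with a minimal sketch. This is not the authors' implementation: the readout here is a simple leaky integrator without firing or reset, and the decay constant `beta` and input weight `w` are illustrative assumptions.

```python
def decode_spike_train(spikes, beta=0.9, w=0.5):
    """Decode a binary spike train into a real number by leaky integration.

    The readout neuron accumulates weighted input spikes into its membrane
    potential u, which decays by a factor beta each time step. Because the
    readout never fires (no threshold/reset), the final membrane potential
    is a real-valued, differentiable function of the spike train.
    """
    u = 0.0  # membrane potential of the readout neuron
    for s in spikes:  # s is 0 or 1 at each time step
        u = beta * u + w * s  # leaky integration of the incoming spike
    return u

# Example: an 8-step binary spike train decoded to a real value
value = decode_spike_train([1, 0, 1, 1, 0, 0, 1, 0])
```

In practice, such a non-spiking leaky readout layer can be trained end-to-end together with the spiking layers, since the final membrane potential depends smoothly on the synaptic weights.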