Paper title
Input-to-State Representation in linear reservoirs dynamics
Paper authors
Paper abstract
Reservoir computing is a popular approach to designing recurrent neural networks, due to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical study by a large community of researchers with backgrounds ranging from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and to explain why a cyclic topology achieves the favourable results verified by practitioners.
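As a minimal numerical sketch of the quantity the abstract refers to (not code from the paper; the reservoir size, spectral radius, and weight distributions below are arbitrary assumptions): for a linear reservoir x_{t+1} = W x_t + b u_t, the controllability matrix K = [b, W b, ..., W^{N-1} b] maps the recent input history to the current state, and its rank serves as the input-independent memory measure mentioned above. The snippet builds K for a dense random reservoir and for a cyclic (ring) reservoir and prints their numerical rank and conditioning.

```python
import numpy as np

def controllability_matrix(W, b):
    """Columns are b, W b, W^2 b, ..., W^(N-1) b.

    For the linear reservoir x_{t+1} = W x_t + b u_t, the state is K
    times the vector of the last N inputs, so K is the input-to-state map.
    """
    N = W.shape[0]
    cols = [b]
    for _ in range(N - 1):
        cols.append(W @ cols[-1])
    return np.column_stack(cols)

rng = np.random.default_rng(0)
N = 20  # illustrative reservoir size (assumption)

# Random reservoir: dense Gaussian weights rescaled to spectral radius 0.9.
W_rand = rng.standard_normal((N, N))
W_rand *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rand)))

# Cyclic reservoir: a single ring of identical weights (scaled cyclic shift).
W_cyc = 0.9 * np.roll(np.eye(N), 1, axis=0)

b = rng.standard_normal(N)  # input weight vector (assumption)

for name, W in [("random", W_rand), ("cyclic", W_cyc)]:
    K = controllability_matrix(W, b)
    print(f"{name:6s} rank(K) = {np.linalg.matrix_rank(K):2d}, "
          f"cond(K) = {np.linalg.cond(K):.2e}")
```

Printing the rank and condition number of K gives a quick, input-independent way to contrast the two topologies; the exact numbers depend on the chosen size, spectral radius, and input weights.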