Paper title
Machine learning in spectral domain
Paper authors
Paper abstract
Deep neural networks are usually trained in the space of the nodes, by adjusting the weights of existing links via suitable optimization protocols. We here propose a radically new approach which anchors the learning process to reciprocal space. Specifically, the training acts on the spectral domain and seeks to modify the eigenvalues and eigenvectors of transfer operators in direct space. The proposed method is ductile and can be tailored to return either linear or non-linear classifiers. Adjusting the eigenvalues, while freezing the eigenvector entries, yields performance superior to that attained with standard methods {\it restricted} to operate with an identical number of free parameters. Tuning the eigenvalues corresponds in fact to performing a global training of the neural network, a procedure which promotes (resp. inhibits) the collective modes on which effective information processing relies. This is at variance with the usual approach to learning, which instead implements a local modulation of the weights associated with pairwise links. Interestingly, spectral learning limited to the eigenvalues returns a distribution of the predicted weights which is close to that obtained when training the neural network in direct space, with no restrictions on the parameters to be tuned. Based on the above, it is surmised that spectral learning bound to the eigenvalues could also be employed for pre-training deep neural networks, in conjunction with conventional machine-learning schemes. Changing the eigenvectors to a different non-orthogonal basis alters the topology of the network in direct space and thus allows the spectral learning strategy to be exported to other frameworks, such as reservoir computing.
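To make the eigenvalue-only training idea concrete, the following is a minimal sketch (not the authors' implementation): a single linear layer is parametrized as W = Phi diag(lam) Phi^{-1}, the eigenvector basis Phi is frozen, and only the eigenvalues lam are updated by gradient descent. The orthogonal random basis, the toy regression task, and plain gradient descent are simplifying assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy layer width

# Fixed eigenvector basis Phi (frozen during training);
# chosen orthogonal here for simplicity, so Phi^{-1} = Phi^T.
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))
Phi_inv = Phi.T

# The eigenvalues are the only free parameters.
lam = np.zeros(n)

# Toy task: regress the action of a hidden operator on random inputs.
W_true = 0.3 * rng.standard_normal((n, n))
X = rng.standard_normal((n, 200))
T = W_true @ X

def mse(lam):
    # Transfer operator in direct space, rebuilt from the spectral parameters.
    W = Phi @ np.diag(lam) @ Phi_inv
    return 0.5 * np.mean((W @ X - T) ** 2)

loss0 = mse(lam)
lr = 0.5
for _ in range(500):
    W = Phi @ np.diag(lam) @ Phi_inv
    G = (W @ X - T) @ X.T / X.shape[1]  # gradient of the loss w.r.t. W
    # Chain rule: dL/dlam_k = phi_k^T G psi_k, with psi_k the k-th row of Phi^{-1}.
    lam -= lr * np.einsum('ik,ij,kj->k', Phi, G, Phi_inv)

loss = mse(lam)
```

With n free eigenvalues against the n^2 entries of the hidden operator, the fit is necessarily partial, but the loss drops well below its initial value, illustrating how a global spectral adjustment shapes the operator without touching individual links.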