Paper Title

Entanglement-Embedded Recurrent Network Architecture: Tensorized Latent State Propagation and Chaos Forecasting

Paper Authors

Xiangyi Meng, Tong Yang

Paper Abstract

Chaotic time series forecasting remains far less understood despite its tremendous potential in theory and in real-world applications. Traditional statistical/ML methods are inefficient at capturing chaos in nonlinear dynamical systems, especially when the time difference $\Delta t$ between consecutive steps is so large that a trivial, ergodic local minimum would most likely be reached instead. Here, we introduce a new long short-term memory (LSTM)-based recurrent architecture that tensorizes the cell-state-to-state propagation, keeping the long-term memory feature of LSTM while simultaneously enhancing the learning of short-term nonlinear complexity. We stress that the global minima of chaos can be reached most efficiently by tensorization, in which all nonlinear terms, up to some polynomial order, are treated explicitly and weighted equally. The efficiency and generality of our architecture are systematically tested and confirmed by theoretical analysis and experimental results. In our design, we explicitly use two different many-body entanglement structures, matrix product states (MPS) and the multiscale entanglement renormalization ansatz (MERA), as physics-inspired tensor decomposition techniques. We find that MERA generally performs better than MPS, and hence conjecture that the learnability of chaos is determined not only by the number of free parameters but also by the tensor complexity, recognized as how entanglement entropy scales with varying matricization of the tensor.
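To make the tensorization idea concrete, the following is a minimal NumPy sketch (not the authors' implementation) of an MPS/tensor-train-parameterized degree-$p$ polynomial map on a hidden state: a low-rank weight tensor is contracted with $p$ copies of the state, so that all degree-$p$ monomials are treated explicitly. The shapes, the bond dimension `r`, and the function name `mps_poly_apply` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, p, r = 4, 3, 2  # hidden size, polynomial order, MPS bond dimension

# MPS (tensor-train) cores G[k] parameterizing a degree-p polynomial map
# on the hidden state. Core shapes: (bond_in, d, bond_out); the last core's
# output index is the d-dimensional output instead of a bond.
# (Illustrative parameterization, not the paper's exact architecture.)
cores = [rng.standard_normal((1 if k == 0 else r, d, r if k < p - 1 else d))
         for k in range(p)]

def mps_poly_apply(h):
    """Contract the MPS cores with p copies of h, covering every degree-p
    monomial h_{i1} * h_{i2} * ... * h_{ip} with low-rank weights."""
    v = np.ones((1,))                          # left boundary vector
    for G in cores[:-1]:
        v = np.einsum('a,aib,i->b', v, G, h)   # absorb one copy of h
    # last core carries the output index j
    return np.einsum('a,aij,i->j', v, cores[-1], h)

h = rng.standard_normal(d)
out = mps_poly_apply(h)
print(out.shape)  # (4,)
```

Because the map is a homogeneous degree-$p$ polynomial, scaling the input by $c$ scales the output by $c^p$, which is easy to verify numerically; in a full cell one would add lower-order cores (or append a constant 1 to `h`) and train the cores by backpropagation.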
