Paper Title
Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control
Paper Authors
Abstract
Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach---in contrast to current methods for learning stable LDSs---updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an orders-of-magnitude improvement in reconstruction error and superior results in terms of control performance. In addition, it is provably more memory-efficient, with an O(n^2) space complexity compared to O(n^4) of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail.
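To make the setting concrete: an LDS evolves as x_{t+1} = A x_t (+ B u_t with inputs), and stability means the spectral radius of A is below 1. The sketch below is not the paper's algorithm; it is a minimal illustration, under assumed helper names (`fit_lds`, `project_stable`), of the two ingredients the abstract discusses: fitting (A, B) by least squares to minimize reconstruction error, and enforcing stability — here via a crude spectral-radius rescaling rather than the paper's constrained gradient updates.

```python
import numpy as np

def fit_lds(X, U=None):
    """Least-squares fit of x_{t+1} ~ A x_t (+ B u_t) from one trajectory.

    X: (n, T) state sequence; U: optional (m, T-1) input sequence.
    Returns (A, B); B is None for an autonomous system.
    """
    Xt, Xn = X[:, :-1], X[:, 1:]            # states at times t and t+1
    if U is None:
        return Xn @ np.linalg.pinv(Xt), None
    Z = np.vstack([Xt, U])                  # stack states and inputs
    AB = Xn @ np.linalg.pinv(Z)             # jointly solves for [A | B]
    n = X.shape[0]
    return AB[:, :n], AB[:, n:]

def project_stable(A, margin=1e-6):
    """Naive stability enforcement: rescale A so its spectral radius is < 1.

    (A crude stand-in for the paper's stability-preserving optimization.)
    """
    rho = max(abs(np.linalg.eigvals(A)))
    return A if rho < 1.0 else A * (1.0 - margin) / rho

# Example: recover a stable 2-state autonomous LDS from simulated data.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])            # stable: eigenvalues 0.9 and 0.8
X = np.zeros((2, 50))
X[:, 0] = rng.standard_normal(2)
for t in range(49):
    X[:, t + 1] = A_true @ X[:, t]

A_hat, _ = fit_lds(X)
A_stable = project_stable(A_hat)
print(max(abs(np.linalg.eigvals(A_stable))) < 1.0)  # the learned model is stable
```

The rescaling step shrinks all eigenvalues uniformly, which can badly distort a model whose fit matrix is only marginally unstable; the paper's contribution is precisely to avoid such distortion by staying inside the stable set while following reconstruction-error gradients, and by updating B jointly with A.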