Paper Title

Infinity Learning: Learning Markov Chains from Aggregate Steady-State Observations

Paper Authors

Jianfei Gao, Mohamed A. Zahran, Amit Sheoran, Sonia Fahmy, Bruno Ribeiro

Paper Abstract

We consider the task of learning a parametric Continuous Time Markov Chain (CTMC) sequence model without examples of sequences, where the training data consists entirely of aggregate steady-state statistics. Making the problem harder, we assume that the states we wish to predict are unobserved in the training data. Specifically, given a parametric model over the transition rates of a CTMC and some known transition rates, we wish to extrapolate its steady-state distribution to states that are unobserved. A technical roadblock to learning a CTMC from its steady state has been that the chain rule used to compute gradients does not work over the arbitrarily long sequences necessary to reach steady state, from which the aggregate statistics are sampled. To overcome this optimization challenge, we propose $\infty$-SGD, a principled stochastic gradient descent method that uses randomly-stopped estimators to avoid the infinite sums required by the steady-state computation, while learning even when only a subset of the CTMC states can be observed. We apply $\infty$-SGD to a real-world testbed and to synthetic experiments, showcasing its accuracy, its ability to extrapolate the steady-state distribution to unobserved states under unobserved conditions (heavy loads, when training under light loads), and its success in difficult scenarios where even a tailor-made extension of existing methods fails.
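The abstract's key computational idea, replacing the infinite sum behind the steady-state limit with a randomly-stopped estimator, can be illustrated with a standard Russian-roulette construction. The sketch below is an assumption-laden illustration, not the paper's actual $\infty$-SGD implementation: it uniformizes the rate matrix $Q$ into a discrete-time kernel $P = I + Q/\lambda$, writes the steady state of an ergodic chain as the telescoping series $\pi = \pi_0 + \sum_{n \ge 1} (\pi_0 P^{n} - \pi_0 P^{n-1})$, and truncates that series at a geometric stopping time $N$, reweighting each kept term by $1/\Pr(N \ge n)$ so the estimate remains unbiased. The function names, the geometric stopping distribution, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def uniformize(Q):
    """Turn a CTMC rate matrix Q into a row-stochastic DTMC kernel P.

    P = I + Q / lam with lam > max_i |Q_ii|; P shares Q's steady-state
    distribution, and the strictly positive diagonal makes P aperiodic.
    (Illustrative helper, not from the paper.)
    """
    lam = np.max(-np.diag(Q)) * 1.01  # strictly dominate all exit rates
    return np.eye(Q.shape[0]) + Q / lam

def russian_roulette_steady_state(Q, q=0.1, rng=None):
    """Single-sample, unbiased, randomly-stopped estimate of the steady state.

    Truncates the telescoping series pi = pi0 + sum_n (pi0 P^n - pi0 P^{n-1})
    at N ~ Geometric(q) and reweights term n by 1 / Pr(N >= n) = (1-q)^-(n-1).
    Unbiasedness assumes the chain is ergodic. (Sketch only.)
    """
    rng = np.random.default_rng() if rng is None else rng
    P = uniformize(Q)
    d = Q.shape[0]
    prev = np.full(d, 1.0 / d)        # pi0: arbitrary start distribution
    est = prev.copy()
    N = rng.geometric(q)              # stopping time with Pr(N >= n) = (1-q)^(n-1)
    for n in range(1, N + 1):
        cur = prev @ P                # pi0 P^n via one matrix-vector product
        est += (cur - prev) / (1.0 - q) ** (n - 1)
        prev = cur
    return est

# Toy usage: average many single-sample estimates for a 3-state CTMC.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 0.5,  0.5, -1.0]])
pi_hat = np.mean([russian_roulette_steady_state(Q) for _ in range(2000)], axis=0)
```

Each sample is cheap (a random, typically short, number of matrix-vector products) but noisy, so in practice one would average many samples; in a learning setting, the estimate is differentiated with respect to the parameters of $Q$ to drive stochastic gradient descent, which is the role $\infty$-SGD plays in the paper.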
