Paper Title

DenseHMM: Learning Hidden Markov Models by Learning Dense Representations

Authors

Joachim Sicking, Maximilian Pintz, Maram Akila, Tim Wirtz

Abstract

We propose DenseHMM - a modification of Hidden Markov Models (HMMs) that allows learning dense representations of both the hidden states and the observables. Compared to the standard HMM, transition probabilities are not atomic but composed of these representations via kernelization. Our approach enables constraint-free and gradient-based optimization. We propose two optimization schemes that make use of this: a modification of the Baum-Welch algorithm and a direct co-occurrence optimization. The latter is highly scalable and comes empirically without loss of performance compared to standard HMMs. We show that the non-linearity of the kernelization is crucial for the expressiveness of the representations. The properties of the DenseHMM, like learned co-occurrences and log-likelihoods, are studied empirically on synthetic and biomedical datasets.
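To make the kernelization idea concrete, here is a minimal NumPy sketch of how row-stochastic transition and emission matrices can be composed from unconstrained, dense embeddings via a softmax kernel. The variable names (u, z, v, w), dimensions, and the specific dot-product-plus-softmax form are illustrative assumptions, not necessarily the paper's exact parameterization; the point is that the probabilities are derived from free vectors, so they can be optimized with plain gradient methods and without simplex constraints.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative setup: n hidden states, m observable symbols, embedding dim d.
rng = np.random.default_rng(0)
n, m, d = 4, 6, 3

# Dense representations (free parameters, no probability constraints):
# u[i], z[j] -> embeddings used for state-to-state transitions,
# v[i], w[k] -> embeddings used for state-to-symbol emissions.
u = rng.normal(size=(n, d))
z = rng.normal(size=(n, d))
v = rng.normal(size=(n, d))
w = rng.normal(size=(m, d))

# Kernelized, row-stochastic matrices: a softmax over dot products turns the
# unconstrained embeddings into valid transition/emission probabilities.
A = softmax(u @ z.T, axis=1)   # A[i, j] ~ P(next state j | state i)
B = softmax(v @ w.T, axis=1)   # B[i, k] ~ P(symbol k | state i)

assert np.allclose(A.sum(axis=1), 1.0) and np.allclose(B.sum(axis=1), 1.0)
```

The softmax is the non-linearity the abstract refers to: it maps arbitrary real-valued embedding scores onto the probability simplex, which is what allows the matrices to be learned by unconstrained gradient-based optimization (e.g., in a modified Baum-Welch loop or against co-occurrence statistics).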
