Title

Fast Incremental Expectation Maximization for finite-sum optimization: nonasymptotic convergence

Authors

Gersende Fort, P. Gach, E. Moulines

Abstract

Fast Incremental Expectation Maximization (FIEM) is a version of the EM framework for large datasets. In this paper, we first recast FIEM and other incremental EM-type algorithms in the {\em Stochastic Approximation within EM} framework. Then, we provide nonasymptotic bounds for the convergence in expectation as a function of the number of examples $n$ and of the maximal number of iterations $\kmax$. We propose two strategies for achieving an $\epsilon$-approximate stationary point, respectively with $\kmax = O(n^{2/3}/\epsilon)$ and $\kmax = O(\sqrt{n}/\epsilon^{3/2})$, both strategies relying on a random termination rule before $\kmax$ and on a constant step size in the Stochastic Approximation step. Our bounds provide some improvements over the literature. First, they allow $\kmax$ to scale as $\sqrt{n}$, which improves on $n^{2/3}$, the best rate obtained so far; this comes at the cost of a larger dependence upon the tolerance $\epsilon$, making this control relevant for small-to-medium accuracy with respect to the number of examples $n$. Second, for the $n^{2/3}$-rate, the numerical illustrations show that, thanks to an optimized choice of the step size and of the bounds in terms of quantities characterizing the optimization problem at hand, our results yield a less conservative choice of the step size and a better control of the convergence in expectation.
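The random termination rule mentioned in the abstract can be illustrated generically: instead of returning the final iterate, the algorithm draws a stopping index uniformly at random from $\{0, \dots, \kmax - 1\}$ and returns that iterate, which is what yields a guarantee on an $\epsilon$-approximate stationary point *in expectation*. The following is a minimal sketch of this rule around an abstract Stochastic Approximation update with constant step size; the function names and the toy mean field are illustrative assumptions, not the paper's implementation.

```python
import random

def run_with_random_termination(sa_step, s0, kmax, step_size, rng=None):
    """Run a Stochastic Approximation loop with a constant step size,
    stopping at an index K drawn uniformly from {0, ..., kmax - 1},
    and return the iterate S_K (the random termination rule).

    sa_step: callable returning the (possibly noisy) mean-field
             direction at the current iterate -- a stand-in for the
             FIEM update, not the paper's actual algorithm.
    """
    rng = rng or random.Random()
    K = rng.randrange(kmax)  # uniform random stopping index
    s = s0
    for _ in range(K):
        s = s + step_size * sa_step(s)  # constant step size
    return s

# Toy usage: mean field h(s) = 1 - s, whose root s* = 1 is the
# stationary point the iterates drift toward.
out = run_with_random_termination(lambda s: 1.0 - s, 0.0, 100, 0.1,
                                  random.Random(0))
```

Because the returned iterate is the one at the random index $K$, averaging this output over many independent runs estimates the expectation that the paper's bounds control.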
