论文标题

机器学习中的优化:分布空间方法

Optimization in Machine Learning: A Distribution Space Approach

论文作者

Yongqiang Cai, Qianxiao Li, Zuowei Shen

论文摘要

我们提出这样的观点:机器学习中遇到的优化问题,通常可以解释为在函数空间上最小化一个凸泛函,但约束集是由模型参数化引入的非凸集合。这一观察使我们能够通过适当的松弛,将此类问题重新表述为训练参数分布空间中的凸优化问题。我们推导了分布空间问题与原问题之间的一些简单关系,例如分布空间中的解至少不劣于原空间中的解。此外,我们基于混合分布开发了一种数值算法,以直接在分布空间中执行近似优化。我们建立了这种近似的一致性,并在简单示例上展示了所提算法的数值有效性。无论在理论上还是实践中,这一表述都为机器学习中的大规模优化提供了另一种途径。

We present the viewpoint that optimization problems encountered in machine learning can often be interpreted as minimizing a convex functional over a function space, but with a non-convex constraint set introduced by model parameterization. This observation allows us to repose such problems via a suitable relaxation as convex optimization problems in the space of distributions over the training parameters. We derive some simple relationships between the distribution-space problem and the original problem, e.g. a distribution-space solution is at least as good as a solution in the original space. Moreover, we develop a numerical algorithm based on mixture distributions to perform approximate optimization directly in distribution space. Consistency of this approximation is established and the numerical efficacy of the proposed algorithm is illustrated on simple examples. In both theory and practice, this formulation provides an alternative approach to large-scale optimization in machine learning.
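To illustrate the core idea in miniature: a loss that is non-convex in the model parameters can become convex when viewed over mixture weights on a fixed collection of parameter "atoms". The sketch below is an illustrative toy, not the paper's actual algorithm; the random tanh features, the squared loss, and plain gradient descent are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: fit y = sin(2x) on [-1, 1].
x = np.linspace(-1.0, 1.0, 64)
y = np.sin(2.0 * x)

# Dictionary of K candidate "neurons" tanh(a*x + b) with randomly
# sampled parameters (a, b). These play the role of atoms of a
# discrete (mixture) distribution over the training parameters.
K = 200
a = rng.normal(scale=3.0, size=K)
b = rng.normal(scale=1.0, size=K)
Phi = np.tanh(np.outer(x, a) + b)  # (64, K) feature matrix

# The squared loss L(w) = mean((Phi @ w - y)**2) is convex in the
# mixture weights w, even though it is non-convex in (a, b).
w = np.zeros(K)
lr = 1e-3
loss0 = np.mean((Phi @ w - y) ** 2)
for _ in range(2000):
    grad = 2.0 * Phi.T @ (Phi @ w - y) / len(x)  # exact gradient of L
    w -= lr * grad
loss1 = np.mean((Phi @ w - y) ** 2)
print(f"initial loss {loss0:.4f} -> final loss {loss1:.4f}")
```

Because the atoms are frozen and only the weights move, each gradient step solves a convex problem; the paper's formulation generalizes this picture to optimization over the full space of distributions on the parameters.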
