Paper title
Machine learning and invariant theory
Paper authors
Paper abstract
Inspired by constraints from physical law, equivariant machine learning restricts the learning to a hypothesis class where all the functions are equivariant with respect to some group action. Irreducible representations or invariant theory are typically used to parameterize the space of such functions. In this article, we introduce the topic and explain a couple of methods to explicitly parameterize equivariant functions that are being used in machine learning applications. In particular, we explicate a general procedure, attributed to Malgrange, to express all polynomial maps between linear spaces that are equivariant under the action of a group $G$, given a characterization of the invariant polynomials on a bigger space. The method also parametrizes smooth equivariant maps in the case that $G$ is a compact Lie group.
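A brief illustrative sketch of the objects named in the abstract (the representations $V$ and $W$, the dual action on $W^*$, and the $\mathrm{O}(d)$ special case below are standard expository choices, not quoted from the paper): a map between $G$-representations is equivariant when it intertwines the group action,
\[
  f\colon V \to W, \qquad f(g\cdot v) \;=\; g\cdot f(v) \quad \text{for all } g\in G,\ v\in V,
\]
and the Malgrange-type correspondence reduces the parameterization of such maps to a description of invariant polynomials on a larger space. An equivariant polynomial map $f\colon V\to W$ determines the invariant polynomial $\tilde f(v,\ell)=\ell\bigl(f(v)\bigr)$ on $V\oplus W^*$ (with the dual action on $W^*$), which is linear in $\ell$; conversely, an invariant polynomial on $V\oplus W^*$ that is linear in the $W^*$-factor recovers an equivariant map $V\to W$:
\[
  \{\, f\colon V\to W \ \text{polynomial, } G\text{-equivariant} \,\}
  \;\longleftrightarrow\;
  \{\, \tilde f \in \mathbb{R}[V\oplus W^*]^G \ \text{linear in } W^* \,\},
  \qquad \tilde f(v,\ell)=\ell\bigl(f(v)\bigr).
\]
As a classical special case, assumed here only for illustration, take $G=\mathrm{O}(d)$ acting on $V=W=\mathbb{R}^d$. The invariant polynomials on $\mathbb{R}^d\oplus\mathbb{R}^d$ are generated by $\langle v,v\rangle$, $\langle v,\ell\rangle$, and $\langle \ell,\ell\rangle$; the invariants linear in $\ell$ are of the form $q(\langle v,v\rangle)\,\langle v,\ell\rangle$, corresponding to the maps
\[
  f(v) \;=\; q\bigl(\lVert v\rVert^2\bigr)\, v, \qquad q \ \text{a polynomial},
\]
which is the familiar description of $\mathrm{O}(d)$-equivariant polynomial maps $\mathbb{R}^d\to\mathbb{R}^d$ as radial rescalings of the input.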