Paper Title
Variational Hierarchical Mixtures for Probabilistic Learning of Inverse Dynamics
Paper Authors
Paper Abstract
Well-calibrated probabilistic regression models are a crucial learning component in robotics applications as datasets grow rapidly and tasks become more complex. Unfortunately, classical regression models are usually either probabilistic kernel machines with a flexible structure that does not scale gracefully with data or deterministic and vastly scalable automata, albeit with a restrictive parametric form and poor regularization. In this paper, we consider a probabilistic hierarchical modeling paradigm that combines the benefits of both worlds to deliver computationally efficient representations with inherent complexity regularization. The presented approaches are probabilistic interpretations of local regression techniques that approximate nonlinear functions through a set of local linear or polynomial units. Importantly, we rely on principles from Bayesian nonparametrics to formulate flexible models that adapt their complexity to the data and can potentially encompass an infinite number of components. We derive two efficient variational inference techniques to learn these representations and highlight the advantages of hierarchical infinite local regression models, such as dealing with non-smooth functions, mitigating catastrophic forgetting, and enabling parameter sharing and fast predictions. Finally, we validate this approach on large inverse dynamics datasets and test the learned models in real-world control scenarios.
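The core idea of approximating a nonlinear function through a set of gated local linear units can be illustrated with a toy sketch. The snippet below is not the paper's hierarchical variational model: it uses fixed Gaussian gates at hand-picked centers and per-unit weighted least squares (all function names, the center placement, and the RBF gating are illustrative assumptions), and fits a non-smooth target, one of the cases the abstract highlights.

```python
import numpy as np

def fit_local_linear(x, y, centers, bandwidth):
    """Fit one responsibility-weighted linear model [slope, intercept] per unit."""
    params = []
    for c in centers:
        w = np.exp(-0.5 * ((x - c) / bandwidth) ** 2)  # Gaussian responsibilities
        X = np.stack([x, np.ones_like(x)], axis=1)     # design matrix [x, 1]
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        params.append(beta)
    return np.array(params)

def predict(xq, centers, bandwidth, params):
    """Blend the local linear predictions with normalized Gaussian gates."""
    gates = np.exp(-0.5 * ((xq[:, None] - centers[None, :]) / bandwidth) ** 2)
    gates /= gates.sum(axis=1, keepdims=True)          # softmax-like gating
    local = params[:, 0][None, :] * xq[:, None] + params[:, 1][None, :]
    return (gates * local).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = np.abs(x) + 0.01 * rng.normal(size=500)            # non-smooth target |x|
centers = np.linspace(-1.0, 1.0, 8)
params = fit_local_linear(x, y, centers, bandwidth=0.15)

xq = np.linspace(-0.9, 0.9, 50)
yq = predict(xq, centers, bandwidth=0.15, params=params)
print("max abs error:", np.max(np.abs(yq - np.abs(xq))))
```

In the paper's formulation, the number of units and the gating are instead learned probabilistically, with a Bayesian nonparametric prior allowing the count of components to grow with the data rather than being fixed at eight as here.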