Paper Title
Deep Divergence Learning
Paper Authors
Paper Abstract
Classical linear metric learning methods have recently been extended along two distinct lines: deep metric learning methods for learning embeddings of the data using neural networks, and Bregman divergence learning approaches that extend learned Euclidean distances to more general divergence measures, such as divergences over distributions. In this paper, we introduce deep Bregman divergences, which are based on learning and parameterizing functional Bregman divergences using neural networks, and which unify and extend these existing lines of work. We show in particular how deep metric learning formulations, kernel metric learning, Mahalanobis metric learning, and moment-matching functions for comparing distributions arise as special cases of these divergences in the symmetric setting. We then describe a deep learning framework for learning general functional Bregman divergences, and show in experiments that this method yields superior performance on benchmark datasets as compared to existing deep metric learning approaches. We also discuss novel applications, including a semi-supervised distributional clustering problem, and a new loss function for unsupervised data generation.
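For context, a minimal sketch of the standard definitions the abstract builds on; these are textbook forms, not the paper's specific neural parameterization. A Bregman divergence is generated by a strictly convex, differentiable function \phi, and its functional generalization replaces the gradient with a variational (Fréchet) derivative evaluated at the second argument, which is what allows divergences defined directly over distributions:

D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle

D_\phi(p, q) = \phi(p) - \phi(q) - \delta\phi(q;\, p - q)
             = \phi(p) - \phi(q) - \int \big(p(x) - q(x)\big)\, \delta\phi(q)(x)\, dx,

where the integral form assumes the Fréchet derivative \delta\phi(q; \cdot) admits an integral representation. As a sanity check on the abstract's claim that Mahalanobis metric learning arises as a special case: choosing \phi(x) = x^\top A x with A symmetric positive semidefinite gives \nabla\phi(y) = 2Ay, and the divergence reduces to the squared Mahalanobis distance (x - y)^\top A (x - y).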