Paper Title
When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee
Paper Authors
Paper Abstract
In this paper, we propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) maximization that are applicable to deep learning. We propose new formulations of pAUC surrogate objectives by using distributionally robust optimization (DRO) to define the loss for each individual positive example. We consider two formulations of DRO: one is based on conditional value at risk (CVaR), which yields a non-smooth but exact estimator for pAUC; the other is based on a KL-divergence-regularized DRO, which yields an inexact but smooth (soft) estimator for pAUC. For both one-way and two-way pAUC maximization, we propose two algorithms and prove their convergence for optimizing the two formulations, respectively. Experiments demonstrate the effectiveness of the proposed algorithms for pAUC maximization in deep learning on various datasets.
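To make the contrast between the two DRO formulations concrete, here is a minimal numpy sketch of the two per-positive losses the abstract describes, using an illustrative pairwise squared-hinge surrogate (the function names, the choice of surrogate, and the hyperparameters `k` and `lam` are assumptions for illustration, not the paper's exact implementation). The CVaR form reduces to averaging the k largest pairwise losses (non-smooth, exact for one-way pAUC at the corresponding false-positive range), while the KL-regularized form is its log-sum-exp smoothing (smooth but inexact).

```python
import numpy as np

def pairwise_losses(pos_score, neg_scores, margin=1.0):
    # Illustrative pairwise squared-hinge surrogate loss between
    # one positive score and all negative scores.
    return np.maximum(0.0, margin - (pos_score - neg_scores)) ** 2

def cvar_pauc_loss(pos_score, neg_scores, k):
    # CVaR-based DRO loss for one positive example: the average of
    # the k largest pairwise losses (the k "hardest" negatives).
    # Non-smooth in the scores, but an exact pAUC surrogate.
    losses = pairwise_losses(pos_score, neg_scores)
    return np.sort(losses)[-k:].mean()

def kl_dro_pauc_loss(pos_score, neg_scores, lam):
    # KL-divergence-regularized DRO loss in its dual form:
    # lam * log E[exp(loss / lam)]. Smooth in the scores, but only
    # a soft (inexact) estimator; lam controls the smoothing.
    losses = pairwise_losses(pos_score, neg_scores)
    return lam * np.log(np.mean(np.exp(losses / lam)))

pos = 0.8
negs = np.array([0.1, 0.5, 0.9, 1.2])
hard = cvar_pauc_loss(pos, negs, k=2)      # average over 2 hardest negatives
soft = kl_dro_pauc_loss(pos, negs, lam=1.0)
```

As `lam` grows, the KL-regularized loss approaches the plain average pairwise loss; as `lam` shrinks, it approaches the maximum pairwise loss, mirroring how CVaR interpolates between the full AUC loss (k = all negatives) and the hardest-negative loss (k = 1).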