Paper Title
Collective Loss Function for Positive and Unlabeled Learning
Paper Authors
Paper Abstract
People learn to discriminate between classes without explicit exposure to negative examples. In contrast, traditional machine learning algorithms often rely on negative examples; without them, a model is prone to collapsing into always-true predictions. It is therefore crucial to design a learning objective that leads the model to converge and to make unbiased predictions without explicit negative signals. In this paper, we propose a collective loss function to learn from only Positive and Unlabeled data (cPU). We derive the loss function theoretically from the setting of PU learning. We conduct extensive experiments on benchmark and real-world datasets. The results show that cPU consistently outperforms the current state-of-the-art PU learning methods.
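To make the problem setting concrete, here is a minimal sketch of the positive-and-unlabeled (PU) data regime the abstract describes: only a fraction of the true positives carry labels, and everything else, positives and negatives alike, sits in one unlabeled pool. The synthetic data, the label frequency `c`, and all variable names are illustrative assumptions, not details from the paper.

```python
# Illustrative PU-learning data setup (assumptions, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

# True (hidden) binary labels: 1 = positive, 0 = negative.
y_true = rng.integers(0, 2, size=1000)

# In PU learning, only some positives are observed as labeled; assume each
# positive is labeled independently with probability c (the label frequency).
c = 0.3
s = (y_true == 1) & (rng.random(1000) < c)  # s[i] True -> observed as positive

n_labeled_pos = int(s.sum())
n_unlabeled = int((~s).sum())
# The unlabeled pool mixes the remaining positives with all the negatives,
# which is why a naive classifier can collapse to always-true predictions.
print(n_labeled_pos, n_unlabeled)
```

A PU method such as cPU must learn an unbiased classifier from `s` alone, without ever seeing an example explicitly marked negative.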