Paper Title
Graph Learning with Loss-Guided Training
Paper Authors
Paper Abstract
Classically, ML models trained with stochastic gradient descent (SGD) are designed to minimize the average loss per example and use a distribution of training examples that remains {\em static} over the course of training. Research in recent years has demonstrated, empirically and theoretically, that significant acceleration is possible with methods that dynamically adjust the training distribution during training so that training focuses more on examples with higher loss. We explore {\em loss-guided training} in a new domain: node embedding methods pioneered by {\sc DeepWalk}. These methods work with an implicit and large set of positive training examples that are generated using random walks on the input graph, and are therefore not amenable to typical example selection methods. We propose computationally efficient methods that allow for loss-guided training in this framework. Our empirical evaluation on a rich collection of datasets shows significant acceleration over the baseline static methods, in terms of both total training performed and overall computation.
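To make the mechanism concrete, below is a minimal sketch (Python/NumPy) of one plausible loss-guided variant of DeepWalk-style training. This is not the paper's algorithm: the toy graph, the loss-proportional sampling rule, and all hyperparameters (embed_dim, window, walk length, learning rate) are illustrative assumptions, and negative sampling is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph as an adjacency list (illustrative input).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
n_nodes = len(graph)
embed_dim = 8  # illustrative hyperparameter

U = rng.normal(scale=0.1, size=(n_nodes, embed_dim))  # node embeddings
V = rng.normal(scale=0.1, size=(n_nodes, embed_dim))  # context embeddings

def random_walk(start, length):
    """Uniform random walk, as in DeepWalk."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(int(rng.choice(graph[walk[-1]])))
    return walk

def walk_to_pairs(walk, window):
    """Positive (node, context) pairs co-occurring within a window."""
    pairs = []
    for i, u in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((u, walk[j]))
    return pairs

def pair_loss(u, v):
    """Negative log-likelihood of a positive pair under a sigmoid score."""
    p = 1.0 / (1.0 + np.exp(-U[u] @ V[v]))
    return -np.log(p + 1e-12)

def sgd_step(u, v, lr=0.05):
    """One gradient step on a positive pair (negative sampling omitted)."""
    p = 1.0 / (1.0 + np.exp(-U[u] @ V[v]))
    grad_u = (p - 1.0) * V[v]
    grad_v = (p - 1.0) * U[u]
    U[u] -= lr * grad_u
    V[v] -= lr * grad_v

# Loss-guided loop: draw a pool of candidate pairs from fresh walks, then
# sample training pairs with probability proportional to their current loss.
for epoch in range(20):
    pool = []
    for start in graph:
        pool += walk_to_pairs(random_walk(start, length=6), window=2)
    losses = np.array([pair_loss(u, v) for u, v in pool])
    probs = losses / losses.sum()  # one plausible loss-guided scheme
    for idx in rng.choice(len(pool), size=len(pool) // 2, p=probs):
        sgd_step(*pool[idx])
```

The contrast with the static baseline is the sampling step: a static method would draw pairs uniformly from the candidate pool, whereas here higher-loss pairs are trained on more often, which is the intuition behind the acceleration the abstract describes.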