Paper Title
Learning Rate Curriculum
Paper Authors
Paper Abstract
Most curriculum learning methods require an approach to sort the data samples by difficulty, which is often cumbersome to perform. In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which leverages the use of a different learning rate for each layer of a neural network to create a data-agnostic curriculum during the initial training epochs. More specifically, LeRaC assigns higher learning rates to neural layers closer to the input, gradually decreasing the learning rates as the layers are placed farther away from the input. The learning rates increase at various paces during the first training iterations, until they all reach the same value. From this point on, the neural model is trained as usual. This creates a model-level curriculum learning strategy that does not require sorting the examples by difficulty and is compatible with any neural network, generating higher performance levels regardless of the architecture. We conduct comprehensive experiments on 12 data sets from the computer vision (CIFAR-10, CIFAR-100, Tiny ImageNet, ImageNet-200, Food-101, UTKFace, PASCAL VOC), language (BoolQ, QNLI, RTE) and audio (ESC-50, CREMA-D) domains, considering various convolutional (ResNet-18, Wide-ResNet-50, DenseNet-121, YOLOv5), recurrent (LSTM) and transformer (CvT, BERT, SepTr) architectures. We compare our approach with the conventional training regime, as well as with Curriculum by Smoothing (CBS), a state-of-the-art data-agnostic curriculum learning approach. Unlike CBS, our performance improvements over the standard training regime are consistent across all data sets and models. Furthermore, we significantly surpass CBS in terms of training time (there is no additional cost over the standard training regime for LeRaC). Our code is freely available at: https://github.com/CroitoruAlin/LeRaC.
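To make the layer-wise warm-up concrete, below is a minimal PyTorch sketch of the idea described in the abstract: layers closer to the input start at a higher learning rate, deeper layers start lower, and all rates are raised to a shared value over the first training iterations. The function names, hyperparameters (`base_lr`, `min_lr`, `warmup_iters`), and the geometric/exponential interpolation schedule are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

def make_lerac_optimizer(layers, base_lr=1e-3, min_lr=1e-5):
    """Build an optimizer with one parameter group per layer.

    The layer closest to the input starts at base_lr; initial rates
    decay geometrically with depth down to min_lr for the last layer.
    (Illustrative schedule; the paper's exact scheme may differ.)
    """
    n = len(layers)
    groups = []
    for i, layer in enumerate(layers):
        init_lr = base_lr * (min_lr / base_lr) ** (i / max(n - 1, 1))
        groups.append({"params": layer.parameters(), "lr": init_lr})
    return torch.optim.SGD(groups, lr=base_lr)

def lerac_step(optimizer, step, warmup_iters, base_lr=1e-3):
    """Raise each group's learning rate toward the shared base_lr.

    After warmup_iters, every group uses base_lr, so the model is
    trained as usual from that point on.
    """
    t = min(step / warmup_iters, 1.0)
    for group in optimizer.param_groups:
        # Remember each group's starting rate on the first call.
        init_lr = group.setdefault("initial_lr", group["lr"])
        # Exponential interpolation from init_lr up to base_lr.
        group["lr"] = init_lr * (base_lr / init_lr) ** t

# Example usage on a toy stack of linear layers (hypothetical model):
layers = [nn.Linear(32, 32) for _ in range(5)]
optimizer = make_lerac_optimizer(layers)
for step in range(1000):
    lerac_step(optimizer, step, warmup_iters=500)
    # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad()
```

Note that this warm-up only changes the per-group learning rates during the first iterations, which is why, as the abstract states, it adds no training-time cost over the standard regime.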