Paper Title
Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
Paper Authors
Paper Abstract
Selecting suitable architecture parameters and training hyperparameters is essential for enhancing machine learning (ML) model performance. Several recent empirical studies conduct large-scale correlational analyses on neural networks (NNs) to search for effective \emph{generalization metrics} that can guide this type of model selection. Effective metrics are typically expected to correlate strongly with test performance. In this paper, we expand on prior analyses by examining generalization-metric-based model selection with the following objectives: (i) focusing on natural language processing (NLP) tasks, as prior work primarily concentrates on computer vision (CV) tasks; (ii) considering metrics that directly predict \emph{test error} instead of the \emph{generalization gap}; and (iii) exploring metrics that do not need access to data to compute. From these objectives, we are able to provide the first model selection results on large pretrained Transformers from Huggingface using generalization metrics. Our analyses consider (I) hundreds of Transformers trained in different settings, in which we systematically vary the amount of data, the model size, and the optimization hyperparameters, (II) a total of 51 pretrained Transformers from eight families of Huggingface NLP models, including GPT2, BERT, etc., and (III) a total of 28 existing and novel generalization metrics. Despite their niche status, we find that metrics derived from the heavy-tail (HT) perspective are particularly useful in NLP tasks, exhibiting stronger correlations than other, more popular metrics. To further examine these metrics, we extend prior formulations relying on power law (PL) spectral distributions to exponential (EXP) and exponentially truncated power law (E-TPL) families.
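To make the "data-free" character of the heavy-tailed metrics concrete, the sketch below fits a power-law exponent to the tail of a weight matrix's eigenvalue spectrum using only the trained weights (no training or test data). This is an illustrative approximation, not the paper's exact procedure: the function name `pl_alpha`, the quantile-based tail cutoff, and the use of a plain Hill-style maximum-likelihood estimator are all assumptions made for the example.

```python
import numpy as np

def pl_alpha(weight_matrix, xmin_quantile=0.5):
    """Estimate a power-law exponent for the tail of the empirical
    spectral distribution of W^T W. Illustrative sketch only; the
    tail cutoff here (a fixed quantile) is a simplifying assumption.
    """
    # Eigenvalues of the correlation matrix W^T W are the squared
    # singular values of W, so no data pass is needed.
    svals = np.linalg.svd(weight_matrix, compute_uv=False)
    evals = svals ** 2
    # Pick a tail cutoff x_min; here simply a quantile of the spectrum.
    xmin = np.quantile(evals, xmin_quantile)
    tail = evals[evals >= xmin]
    # Continuous power-law MLE (Hill estimator):
    # alpha = 1 + n / sum(log(lambda_i / x_min)).
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
    return float(alpha)

# Example: spectrum of a random Gaussian matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(512, 256))
print(pl_alpha(W))
```

A smaller fitted exponent indicates a heavier spectral tail; metrics of this kind can then be correlated against test performance across a family of trained models, which is the type of analysis the abstract describes.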