Paper Title

"Diversity and Uncertainty in Moderation" are the Key to Data Selection for Multilingual Few-shot Transfer

Paper Authors

Shanu Kumar, Sandipan Dandapat, Monojit Choudhury

Paper Abstract

Few-shot transfer often shows substantial gain over zero-shot transfer~\cite{lauscher2020zero}, which is a practically useful trade-off between fully supervised and unsupervised learning approaches for multilingual pretrained model-based systems. This paper explores various strategies for selecting data for annotation that can result in a better few-shot transfer. The proposed approaches rely on multiple measures such as data entropy using $n$-gram language model, predictive entropy, and gradient embedding. We propose a loss embedding method for sequence labeling tasks, which induces diversity and uncertainty sampling similar to gradient embedding. The proposed data selection strategies are evaluated and compared for POS tagging, NER, and NLI tasks for up to 20 languages. Our experiments show that the gradient and loss embedding-based strategies consistently outperform random data selection baselines, with gains varying with the initial performance of the zero-shot transfer. Furthermore, the proposed method shows similar trends in improvement even when the model is fine-tuned using a lower proportion of the original task-specific labeled training data for zero-shot transfer.
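Among the selection signals the abstract names, predictive entropy is the most self-contained: score each unlabeled sentence by the model's per-token uncertainty and prioritize the most uncertain ones for annotation. A minimal illustrative sketch of that scoring step is below; the function names and the choice of averaging entropy over tokens are assumptions for illustration, not the paper's exact formulation.

```python
import math

def predictive_entropy(prob_dist):
    """Shannon entropy of one token's predicted label distribution.

    A uniform distribution (maximal uncertainty) gives log(num_labels);
    a one-hot distribution (full confidence) gives 0.
    """
    return -sum(p * math.log(p) for p in prob_dist if p > 0)

def sentence_uncertainty(token_dists):
    """Average per-token predictive entropy for a sequence-labeling input.

    Higher values mean the model is less certain about the sentence, making
    it a candidate for annotation under uncertainty-based data selection.
    (Averaging over tokens is an illustrative choice, not the paper's.)
    """
    return sum(predictive_entropy(d) for d in token_dists) / len(token_dists)
```

For example, a sentence whose tokens all receive near-uniform label distributions scores close to `log(num_labels)` and would be ranked ahead of sentences the model already labels confidently.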
