Paper Title

Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning

Paper Authors

Mozhdeh Gheini, Xuezhe Ma, Jonathan May

Paper Abstract

A recent family of techniques, dubbed lightweight fine-tuning methods, facilitates parameter-efficient transfer learning by updating only a small set of additional parameters while keeping the parameters of the pretrained language model frozen. While proven to be an effective method, there are no existing studies on if and how such knowledge of the downstream fine-tuning approach should affect the pretraining stage. In this work, we show that taking the ultimate choice of fine-tuning method into consideration boosts the performance of parameter-efficient fine-tuning. By relying on optimization-based meta-learning using MAML with certain modifications for our distinct purpose, we prime the pretrained model specifically for parameter-efficient fine-tuning, resulting in gains of up to 1.7 points on cross-lingual NER fine-tuning. Our ablation settings and analyses further reveal that the tweaks we introduce in MAML are crucial for the attained gains.
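To make the priming idea concrete, the sketch below shows a first-order, MAML-style outer loop in which the inner loop updates only a small adapter (standing in for the parameter-efficient fine-tuning parameters) while the outer loop updates the backbone, so the backbone is trained in anticipation of downstream PEFT. The tiny model, synthetic data, and hyperparameters are illustrative assumptions, and the first-order approximation and per-step adapter reset are simplifications; this is not the authors' exact modification of MAML.

```python
# Minimal, first-order sketch of MAML-style priming for parameter-efficient
# fine-tuning. Hypothetical model and data; not the paper's exact recipe.
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self, dim=16, n_classes=3):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)   # stands in for the pretrained LM
        self.adapter = nn.Linear(dim, dim)    # the small set of PEFT parameters
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        h = h + self.adapter(h)               # adapter-style residual update
        return self.head(h)

def random_batch(n=8, dim=16, n_classes=3):
    # Synthetic stand-in for support/query batches of a downstream task.
    return torch.randn(n, dim), torch.randint(0, n_classes, (n,))

model = TinyModel()
backbone_params = list(model.backbone.parameters()) + list(model.head.parameters())
peft_params = list(model.adapter.parameters())
outer_opt = torch.optim.Adam(backbone_params, lr=1e-3)  # primes the to-be-frozen weights
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    support_x, support_y = random_batch()
    query_x, query_y = random_batch()

    # Inner loop: simulate downstream PEFT by updating only the adapter.
    snapshot = [p.detach().clone() for p in peft_params]
    inner_opt = torch.optim.SGD(peft_params, lr=1e-2)
    for _ in range(3):
        inner_opt.zero_grad()
        loss_fn(model(support_x), support_y).backward()
        inner_opt.step()

    # Outer loop: the query loss after adaptation updates the backbone
    # (first-order approximation; no second-order gradients through the inner steps).
    outer_opt.zero_grad()
    loss_fn(model(query_x), query_y).backward()
    outer_opt.step()

    # Reset the adapter so each outer step starts from the same initialization.
    with torch.no_grad():
        for p, s in zip(peft_params, snapshot):
            p.copy_(s)
```

In a setting closer to the paper's cross-lingual NER experiments, the support and query batches would come from the available languages or tasks, and after priming the backbone would be frozen while only the PEFT parameters are fine-tuned on the target task.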
