Paper Title

Injecting Numerical Reasoning Skills into Language Models

Authors

Mor Geva, Ankit Gupta, Jonathan Berant

Abstract

Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information. However, high-level reasoning skills, such as numerical reasoning, are difficult to learn from a language-modeling objective only. Consequently, existing models for numerical reasoning have used specialized architectures with limited flexibility. In this work, we show that numerical reasoning is amenable to automatic data generation, and thus one can inject this skill into pre-trained LMs, by generating large amounts of data, and training in a multi-task setup. We show that pre-training our model, GenBERT, on this data, dramatically improves performance on DROP (49.3 $\rightarrow$ 72.3 F1), reaching performance that matches state-of-the-art models of comparable size, while using a simple and general-purpose encoder-decoder architecture. Moreover, GenBERT generalizes well to math word problem datasets, while maintaining high performance on standard RC tasks. Our approach provides a general recipe for injecting skills into large pre-trained LMs, whenever the skill is amenable to automatic data augmentation.
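The abstract's key claim is that numerical reasoning is amenable to automatic data generation. Below is a minimal sketch (not the authors' released code) of what such synthetic numeric question–answer generation could look like; the operations, templates, and function names are illustrative assumptions, not the exact GenBERT pre-training data.

```python
# Minimal sketch: synthesize numeric QA pairs that could be mixed with
# textual data when further pre-training a language model in a multi-task
# setup. Templates and operations here are assumptions for illustration.
import random

OPS = {
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
}

def generate_numeric_example(rng: random.Random) -> dict:
    """Sample two numbers and an operation, and render a textual QA pair."""
    a, b = rng.randint(0, 10_000), rng.randint(0, 10_000)
    op_name = rng.choice(list(OPS))
    question = f"What is {a} {op_name} {b}?"
    answer = str(OPS[op_name](a, b))
    return {"question": question, "answer": answer}

if __name__ == "__main__":
    rng = random.Random(0)
    # A real setup would generate millions of such examples and train on
    # them alongside standard reading-comprehension data.
    for ex in (generate_numeric_example(rng) for _ in range(5)):
        print(ex["question"], "->", ex["answer"])
```

Because such examples can be produced cheaply and at scale, the same recipe extends to any skill for which data can be generated automatically, which is the generality the abstract emphasizes.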
