Paper Title

A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English

Authors

Marius Mosbach, Stefania Degaetano-Ortlieb, Marie-Pauline Krielke, Badr M. Abdullah, Dietrich Klakow

Abstract

Transformer-based language models achieve high performance on various tasks, but we still lack understanding of the kind of linguistic knowledge they learn and rely on. We evaluate three models (BERT, RoBERTa, and ALBERT), testing their grammatical and semantic knowledge through sentence-level probing, diagnostic cases, and masked prediction tasks. We focus on relative clauses (in American English) as a complex phenomenon that requires contextual information and antecedent identification to be resolved. On a naturalistic dataset, probing shows that all three models indeed capture linguistic knowledge about grammaticality, achieving high performance. Evaluation on diagnostic cases and masked prediction tasks that target fine-grained linguistic knowledge, however, reveals pronounced model-specific weaknesses, especially in semantic knowledge, which strongly impact the models' performance. Our results highlight the importance of (a) model comparison in evaluation tasks and (b) grounding claims about model performance and the linguistic knowledge models capture beyond purely probing-based evaluations.
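The masked prediction setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation code: `score_candidates` is a hypothetical stub standing in for a real masked language model (e.g. BERT queried for the probability of each relativizer in the `[MASK]` slot), and the example sentence and stub scores are invented for illustration.

```python
# Hedged sketch of a masked prediction task for relative clauses.
# A real setup would replace `score_candidates` with an MLM that
# returns (log-)probabilities for each candidate filling [MASK].

RELATIVIZERS = ("who", "which", "that", "whose")

def score_candidates(masked_sentence, candidates):
    """Stub scorer: returns a fake log-probability per candidate.

    Hypothetical values chosen so that an animate antecedent
    favors "who"; a real model's scores would be sentence-dependent.
    """
    stub_scores = {"who": -1.2, "which": -2.5, "that": -1.8, "whose": -4.0}
    return {c: stub_scores.get(c, -10.0) for c in candidates}

def predict_relativizer(masked_sentence, candidates=RELATIVIZERS):
    """Pick the candidate the (stub) model scores highest for [MASK]."""
    scores = score_candidates(masked_sentence, candidates)
    return max(scores, key=scores.get)

# Invented example: the antecedent "teacher" is animate, so a model
# with the relevant semantic knowledge should prefer "who".
sentence = "The teacher [MASK] won the award retired last year."
prediction = predict_relativizer(sentence)
```

Diagnostic evaluation then amounts to checking whether the predicted relativizer agrees with the antecedent's semantic properties (here, animacy) across a controlled set of such sentences.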
