Paper Title

Calibrating Factual Knowledge in Pretrained Language Models

Authors

Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, Lei Li

Abstract

Previous literature has proved that Pretrained Language Models (PLMs) can store factual knowledge. However, we find that facts stored in the PLMs are not always correct. It motivates us to explore a fundamental question: How do we calibrate factual knowledge in PLMs without re-training from scratch? In this work, we propose a simple and lightweight method CaliNet to achieve this goal. To be specific, we first detect whether PLMs can learn the right facts via a contrastive score between right and fake facts. If not, we then use a lightweight method to add and adapt new parameters to specific factual texts. Experiments on the knowledge probing task show the calibration effectiveness and efficiency. In addition, through closed-book question answering, we find that the calibrated PLM possesses knowledge generalization ability after fine-tuning. Beyond the calibration performance, we further investigate and visualize the knowledge calibration mechanism.
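The abstract mentions detecting whether a PLM has learned a fact via a contrastive score between right and fake facts. As a rough illustration of that idea (not the paper's exact formula), the sketch below compares the score an LM assigns to the correct fact against the mean score of fake variants; the `score_fn` interface and the margin form are assumptions for illustration, and a real PLM would supply the log-probabilities.

```python
# Hypothetical sketch of the contrastive-score idea from the abstract.
# score_fn stands in for a PLM's log-probability of a fact sentence;
# the exact scoring formula in CaliNet may differ.
from typing import Callable, Sequence


def contrastive_score(
    score_fn: Callable[[str], float],
    right_fact: str,
    fake_facts: Sequence[str],
) -> float:
    """Margin between the right fact's score and the mean score of
    fake facts. A positive margin suggests the PLM prefers the correct
    fact; a non-positive one would flag the fact for calibration."""
    fake_mean = sum(score_fn(f) for f in fake_facts) / len(fake_facts)
    return score_fn(right_fact) - fake_mean


# Toy demonstration with a stub scorer (hand-picked numbers, not a real PLM):
toy_scores = {
    "Paris is the capital of France.": -1.0,
    "Rome is the capital of France.": -4.0,
    "Berlin is the capital of France.": -5.0,
}
margin = contrastive_score(
    toy_scores.get,
    "Paris is the capital of France.",
    ["Rome is the capital of France.", "Berlin is the capital of France."],
)
print(margin)  # 3.5 -> the stub "model" prefers the correct fact
```

With scores from an actual masked or seq2seq LM, facts whose margin falls below a threshold would be the ones routed to the lightweight calibration step the abstract describes.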
