Paper Title


Adversarial Training for Commonsense Inference

Paper Authors

Lis Pereira, Xiaodong Liu, Fei Cheng, Masayuki Asahara, Ichiro Kobayashi

Paper Abstract


We propose an AdversariaL training algorithm for commonsense InferenCE (ALICE). We apply small perturbations to word embeddings and minimize the resultant adversarial risk to regularize the model. We exploit a novel combination of two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Without relying on any human-crafted features, knowledge bases, or additional datasets other than the target datasets, our model boosts the fine-tuning performance of RoBERTa, achieving competitive results on multiple reading comprehension datasets that require commonsense inference.
