Paper Title

Downstream Model Design of Pre-trained Language Model for Relation Extraction Task

Paper Authors

Cheng Li, Ye Tian

Paper Abstract

Supervised relation extraction methods based on deep neural networks play an important role in the recent information extraction field. However, their performance still falls short due to the existence of complicated relations. On the other hand, recently proposed pre-trained language models (PLMs) have achieved great success in multiple natural language processing tasks through fine-tuning when combined with downstream task models. However, the original standard tasks of PLMs do not yet include relation extraction. We believe that PLMs can also be used to solve the relation extraction problem, but it is necessary to design a dedicated downstream model, or even a loss function, to deal with complicated relations. In this paper, a new network architecture with a special loss function is designed to serve as the downstream model of PLMs for supervised relation extraction. Experiments show that our method significantly outperforms the current best baseline models on multiple public relation extraction datasets.
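The abstract does not spell out the downstream architecture or the special loss function it proposes. For orientation only, here is a minimal PyTorch sketch of the generic pattern it refers to: a PLM encoder (BERT assumed here) feeding a small relation-classification head. The `RelationExtractionHead` class, the entity-position pooling, the `num_relations` count, and the plain cross-entropy loss are all illustrative assumptions, not the method actually proposed in the paper.

```python
# A minimal, assumption-laden sketch of the setup the abstract describes:
# a PLM encoder with a task-specific downstream head for supervised
# relation extraction. The entity pooling and cross-entropy loss below
# are common conventions, NOT the paper's architecture or special loss.
import torch
import torch.nn as nn
from transformers import AutoModel

class RelationExtractionHead(nn.Module):
    """Downstream model on top of a PLM: pools the two entity spans
    and classifies the relation between them."""
    def __init__(self, plm_name: str = "bert-base-uncased", num_relations: int = 42):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.encoder.config.hidden_size
        # Concatenate head- and tail-entity representations, then classify.
        self.classifier = nn.Linear(2 * hidden, num_relations)
        self.loss_fn = nn.CrossEntropyLoss()  # placeholder for the paper's special loss

    def forward(self, input_ids, attention_mask, head_pos, tail_pos, labels=None):
        # (batch, seq_len, hidden) token representations from the PLM
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        batch_idx = torch.arange(states.size(0), device=states.device)
        head_repr = states[batch_idx, head_pos]  # token at head-entity position
        tail_repr = states[batch_idx, tail_pos]  # token at tail-entity position
        logits = self.classifier(torch.cat([head_repr, tail_repr], dim=-1))
        loss = self.loss_fn(logits, labels) if labels is not None else None
        return logits, loss
```

Fine-tuning then proceeds as usual: the PLM weights and the head are updated jointly on the labeled relation extraction data, which is the "PLM + downstream model" combination the abstract contrasts with the PLM's original pre-training tasks.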
