Paper Title

Event-Centric Question Answering via Contrastive Learning and Invertible Event Transformation

Authors

Lu, Junru, Tan, Xingwei, Pergola, Gabriele, Gui, Lin, He, Yulan

Abstract

Human reading comprehension often requires reasoning about event semantic relations in narratives, a task represented by Event-centric Question Answering (QA). To address event-centric QA, we propose a novel QA model with contrastive learning and invertible event transformation, called TranCLR. Our proposed model utilizes an invertible transformation matrix to project semantic vectors of events into a common event embedding space, trained with contrastive learning, thus naturally injecting event semantic knowledge into mainstream QA pipelines. The transformation matrix is fine-tuned with the annotated event relation types between events occurring in questions and those in answers, using event-aware question vectors. Experimental results on the Event Semantic Relation Reasoning (ESTER) dataset show significant improvements in both generative and extractive settings compared to existing strong baselines, with over 8.4% gain in token-level F1 score and 3.0% gain in Exact Match (EM) score under the multi-answer setting. Qualitative analysis reveals the high quality of the answers generated by TranCLR, demonstrating the feasibility of injecting event knowledge into QA model learning. Our code and models can be found at https://github.com/LuJunru/TranCLR.
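The abstract describes two core ingredients: an invertible transformation that maps event semantic vectors into a shared event embedding space, and a contrastive objective that pulls related question/answer events together in that space. Below is a minimal, hypothetical sketch of this idea (not the authors' actual implementation): the transformation is kept invertible by constraining it to be orthogonal, and the contrastive loss is a standard InfoNCE with in-batch negatives. The class and function names, dimensions, and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertibleEventProjection(nn.Module):
    """Hypothetical sketch: an invertible linear map into a shared event space.

    The orthogonal parametrization keeps the weight matrix orthogonal during
    training, so the transformation is invertible by construction
    (its inverse is simply the transpose).
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.utils.parametrizations.orthogonal(
            nn.Linear(dim, dim, bias=False)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE contrastive loss with in-batch negatives.

    Row i of `positives` is the positive pair for row i of `anchors`;
    all other rows in the batch act as negatives.
    """
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))         # positive is the diagonal entry
    return F.cross_entropy(logits, targets)

# Toy usage with random "event vectors" (illustrative only).
torch.manual_seed(0)
dim, batch = 16, 8
model = InvertibleEventProjection(dim)
q_events = torch.randn(batch, dim)                    # events from questions
a_events = q_events + 0.1 * torch.randn(batch, dim)   # related events from answers
loss = info_nce_loss(model(q_events), model(a_events))
```

The orthogonality constraint is one simple way to guarantee invertibility; the actual TranCLR transformation matrix may be parametrized differently.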
