Paper Title

What GPT Knows About Who is Who

Paper Authors

Xiaohan Yang, Eduardo Peynetti, Vasco Meerman, Chris Tanner

Paper Abstract

Coreference resolution -- which is a crucial task for understanding discourse and language at large -- has yet to witness widespread benefits from large language models (LLMs). Moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making it ripe for prompt engineering. In this paper, we introduce a QA-based prompt-engineering method and discern generative, pre-trained LLMs' abilities and limitations toward the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and prompt-sensitive, leading to inconsistent results.
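
As a rough illustration of what a QA-style coreference prompt to a generative LLM might look like, the minimal Python sketch below queries GPT-2 through the Hugging Face transformers text-generation pipeline. The passage, prompt template, and decoding settings are illustrative assumptions for this sketch, not the exact prompt format or evaluation setup used in the paper.

# Minimal sketch (assumed setup): ask a generative LM, QA-style, which mention a pronoun refers to.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical example passage and prompt template; the paper's actual prompts may differ.
passage = "Alice met Bob at the station. She gave him the tickets."
prompt = (
    f"{passage}\n"
    'Question: In the passage above, who does "She" refer to?\n'
    "Answer:"
)

output = generator(
    prompt,
    max_new_tokens=5,   # only a short mention span is needed as the answer
    do_sample=False,    # greedy decoding for a deterministic answer
    pad_token_id=generator.tokenizer.eos_token_id,
)

# The text generated after "Answer:" is taken as the predicted coreferent mention.
answer = output[0]["generated_text"][len(prompt):].strip()
print(answer)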
