Paper Title
Evaluating Prompts Across Multiple Choice Tasks In a Zero-Shot Setting
Paper Authors
Paper Abstract
Large language models have shown that impressive zero-shot performance can be achieved through natural language prompts (Radford et al., 2019; Brown et al., 2020; Sanh et al., 2021). Creating an effective prompt, however, requires significant trial and error. That \textit{prompts} the question: how do the qualities of a prompt affect its performance? To this end, we collect and standardize prompts from a diverse range of tasks for use on tasks they were not designed for. We then evaluate these prompts across fixed multiple choice datasets for a quantitative analysis of how certain attributes of a prompt affect performance. We find that including the choices and using prompts not used during pre-training provide significant improvements. All experiments and code can be found at https://github.com/gabeorlanski/zero-shot-cross-task.
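The abstract's setup, evaluating a prompt zero-shot on a multiple-choice dataset, can be illustrated with a minimal sketch: score each answer choice by its log-likelihood under the model given the prompt and predict the highest-scoring one. This is not the authors' released code (see the repository linked above); the model name ("gpt2"), the prompt template, and the helper names below are illustrative assumptions.

    # Minimal sketch of zero-shot multiple-choice evaluation by choice scoring.
    # Assumes a Hugging Face causal LM; "gpt2" is a placeholder model.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def score_choice(prompt: str, choice: str) -> float:
        """Average log-probability of the choice tokens, conditioned on the prompt."""
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits  # (1, seq_len, vocab)
        # Log-probabilities at each position for predicting the *next* token.
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = full_ids[:, 1:]
        token_scores = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # Keep only the positions that predict the choice's tokens.
        choice_scores = token_scores[:, prompt_ids.shape[1] - 1:]
        return choice_scores.mean().item()

    def predict(prompt: str, choices: list[str]) -> str:
        """Return the choice the model assigns the highest score."""
        return max(choices, key=lambda c: score_choice(prompt, c))

    # Toy example of a prompt that includes the choices in its text,
    # one of the prompt attributes the abstract says helps performance.
    question = "Which planet is known as the Red Planet?"
    choices = ["Mars", "Venus", "Jupiter"]
    prompt = f"{question} Choices: {', '.join(choices)}. Answer: "
    print(predict(prompt, choices))

Accuracy over a fixed multiple-choice dataset would then be the fraction of examples where predict matches the gold answer, which is the kind of quantitative comparison across prompts the abstract describes.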