Paper Title


Prompt Injection: Parameterization of Fixed Inputs

Authors

Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

Abstract


Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LMs) to perform specific tasks. However, prompts are always included in the input text during inference, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation of injecting the prompt into the parameters of an LM, as an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.
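To see where a FLOPs saving of this magnitude could come from, consider a rough back-of-envelope sketch (not from the paper): self-attention cost grows quadratically with sequence length, so removing a long fixed prompt of length P from every inference call shrinks the dominant term from (P + x)² to x². The prompt and input lengths below are illustrative assumptions, chosen only to show that a ~280× ratio is plausible under this simplified model.

```python
def relative_attention_cost(prompt_len: int, input_len: int) -> float:
    """Ratio of quadratic attention cost with vs. without a fixed prompt
    prepended to the input. Simplified model: cost ~ (sequence length)^2,
    ignoring the linear (per-token FFN) terms."""
    with_prompt = (prompt_len + input_len) ** 2
    without_prompt = input_len ** 2
    return with_prompt / without_prompt

# Illustrative example: a 2000-token fixed prompt and a 128-token user input.
ratio = relative_attention_cost(2000, 128)
print(f"Attaching the prompt costs ~{ratio:.0f}x more attention FLOPs")
```

Under these assumed lengths the ratio lands in the high 200s, consistent in order of magnitude with the up-to-280× figure the abstract reports for long fixed prompts.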
