Paper Title

Compositional Semantic Parsing with Large Language Models

Paper Authors

Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou

Paper Abstract

Humans can reason compositionally when presented with new tasks. Previous research shows that appropriate prompting techniques enable large language models (LLMs) to solve artificial compositional generalization tasks such as SCAN. In this work, we identify additional challenges in more realistic semantic parsing tasks with larger vocabulary and refine these prompting techniques to address them. Our best method is based on least-to-most prompting: it decomposes the problem using prompting-based syntactic parsing, then uses this decomposition to select appropriate exemplars and to sequentially generate the semantic parse. This method allows us to set a new state of the art for CFQ while requiring only 1% of the training data used by traditional approaches. Due to the general nature of our approach, we expect similar efforts will lead to new results in other tasks and domains, especially for knowledge-intensive applications.
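For readers unfamiliar with least-to-most prompting, the sketch below illustrates the general shape of such a pipeline as described in the abstract: decompose the question with a prompt, select exemplars that cover the decomposition, and translate the sub-questions sequentially before producing the full parse. This is a simplified illustration under assumed interfaces, not the paper's implementation; all names here (least_to_most_parse, call_llm, decompose_prompt, translate_prompt, exemplar_pool) are hypothetical placeholders.

```python
# Minimal sketch of a least-to-most style semantic parsing pipeline.
# All helper names are hypothetical placeholders, not the paper's actual code.

from typing import Callable, List, Tuple


def least_to_most_parse(
    question: str,
    exemplar_pool: List[Tuple[str, str]],   # (sub-question, gold parse) pairs
    call_llm: Callable[[str], str],         # user-supplied wrapper around an LLM
    decompose_prompt: str,
    translate_prompt: str,
    top_k: int = 4,
) -> str:
    """Decompose a question, pick matching exemplars, then build the parse step by step."""
    # 1) Prompting-based decomposition: ask the LLM to split the question into
    #    simpler sub-questions (a stand-in for the paper's syntactic parsing stage).
    decomposition = call_llm(decompose_prompt + "\nQ: " + question + "\nSub-questions:")
    sub_questions = [s.strip() for s in decomposition.split("\n") if s.strip()]

    # 2) Exemplar selection: rank pool entries by lexical overlap with the
    #    sub-questions (a crude proxy for coverage-based selection).
    def overlap(exemplar_question: str) -> int:
        tokens = set(exemplar_question.lower().split())
        return sum(len(tokens & set(sq.lower().split())) for sq in sub_questions)

    exemplars = sorted(exemplar_pool, key=lambda ex: overlap(ex[0]), reverse=True)[:top_k]
    exemplar_block = "\n".join(f"Q: {q}\nParse: {p}" for q, p in exemplars)

    # 3) Sequential generation: translate each sub-question in order, feeding
    #    earlier answers back into the prompt, then produce the final parse.
    context = ""
    for sq in sub_questions:
        step = call_llm(f"{translate_prompt}\n{exemplar_block}\n{context}Q: {sq}\nParse:")
        context += f"Q: {sq}\nParse: {step.strip()}\n"

    final = call_llm(f"{translate_prompt}\n{exemplar_block}\n{context}Q: {question}\nParse:")
    return final.strip()
```

In this sketch, the exemplar pool and the decomposition and translation prompts would be task-specific (e.g., built from a small fraction of CFQ training data), and the overlap heuristic is only one of many possible ways to select exemplars that cover a decomposition.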
