Paper Title
A Study on Prompt-based Few-Shot Learning Methods for Belief State Tracking in Task-oriented Dialog Systems
Paper Authors
Paper Abstract
We tackle the Dialogue Belief State Tracking (DST) problem of task-oriented conversational systems. Recent approaches to this problem that leverage Transformer-based models have yielded strong results. However, training these models is expensive, both in terms of computational resources and time. Additionally, collecting high-quality annotated dialogue datasets remains a challenge for researchers because of the extensive annotation required for training these models. Driven by the recent success of pre-trained language models and prompt-based learning, we explore prompt-based few-shot learning for Dialogue Belief State Tracking. We formulate the DST problem as a 2-stage prompt-based language modelling task, train language models for both stages, and present a comprehensive empirical analysis of their separate and joint performance. We demonstrate the potential of prompt-based methods in few-shot learning for DST and provide directions for future improvement.
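As a rough illustration of what a prompt-based formulation of DST can look like in practice (this is only a minimal sketch, not the paper's specific 2-stage method; the prompt template, slot names, and choice of `t5-small` are assumptions for demonstration), one can cast each slot-value prediction as a text-to-text prompt fed to a generative language model:

```python
# Illustrative sketch of prompt-based belief state tracking.
# NOTE: the template, slot list, and model are placeholders, not the
# 2-stage formulation described in the abstract.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

dialogue_context = "user: i need a cheap restaurant in the centre of town"
slots = ["restaurant-pricerange", "restaurant-area", "restaurant-food"]

belief_state = {}
for slot in slots:
    # The model reads the dialogue context plus a natural-language question
    # about one slot, and generates the slot value (or "none") as text.
    prompt = f"dialogue: {dialogue_context} question: what is the value of {slot}?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=8)
    belief_state[slot] = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(belief_state)
```

In a few-shot setting, such a model would typically be fine-tuned on a small number of annotated dialogues rather than used zero-shot as above.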