Paper Title

Ranking Clarification Questions via Natural Language Inference

Authors

Vaibhav Kumar, Vikas Raunak, Jamie Callan

Abstract

Given a natural language query, teaching machines to ask clarifying questions is of immense utility in practical natural language processing systems. Such interactions could help in filling information gaps for better machine comprehension of the query. For the task of ranking clarification questions, we hypothesize that determining whether a clarification question pertains to a missing entry in a given post (on QA forums such as StackExchange) can be considered a special case of Natural Language Inference (NLI), where both the post and the most relevant clarification question point to a shared latent piece of information or context. We validate this hypothesis by incorporating representations from a Siamese BERT model fine-tuned on the NLI and Multi-NLI datasets into our models, and demonstrate that our best-performing model obtains relative performance improvements of 40 percent and 60 percent respectively (on the key metric of Precision@1) over the state-of-the-art baseline(s) on the two evaluation sets of the StackExchange dataset, thereby significantly surpassing the state-of-the-art.
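The ranking step the abstract describes can be illustrated with a minimal sketch. Here we assume the post and each candidate clarification question have already been encoded into fixed-size embeddings by a Siamese (bi-encoder) BERT model fine-tuned on NLI data; the toy vectors below merely stand in for such embeddings, and the function name is our own. Candidates are then ranked by cosine similarity to the post, a common scoring choice for bi-encoder retrieval:

```python
import numpy as np

def rank_clarification_questions(post_emb, question_embs):
    """Rank candidate clarification questions by cosine similarity
    to the post embedding. The embeddings are assumed to come from
    a Siamese BERT encoder fine-tuned on NLI/Multi-NLI, so that a
    post and a relevant question map to nearby points."""
    post = post_emb / np.linalg.norm(post_emb)
    qs = question_embs / np.linalg.norm(question_embs, axis=1, keepdims=True)
    scores = qs @ post                      # cosine similarity per candidate
    order = np.argsort(-scores)             # indices, best candidate first
    return order, scores

# Toy embeddings standing in for encoder outputs (not real model vectors).
post = np.array([1.0, 0.0, 1.0])
candidates = np.array([
    [0.9, 0.1, 1.1],    # semantically close to the post
    [-1.0, 0.5, 0.0],   # unrelated
])
order, scores = rank_clarification_questions(post, candidates)
print(order[0])  # index of the top-ranked (Precision@1) candidate
```

In the paper's setting, Precision@1 asks whether `order[0]` points at the human-annotated best clarification question for the post.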
