Paper Title
Learning to Answer Multilingual and Code-Mixed Questions
Paper Authors
Paper Abstract
Question answering (QA), which comes naturally to humans, is a critical component of seamless human-computer interaction. It has emerged as one of the most convenient and natural ways to interact with the web and is especially desirable in voice-controlled environments. Despite being one of the oldest research areas, current QA systems still face the critical challenge of handling multilingual queries. To build an Artificial Intelligence (AI) agent that can serve multilingual end users, a QA system must be language-versatile and tailored to multilingual environments. Recent advances in QA models have enabled them to surpass human performance, primarily owing to the availability of sizable, high-quality datasets. However, most such annotated datasets are expensive to create and are confined to English, making it difficult to track progress in other languages. Therefore, to measure comparable improvements in multilingual QA systems, it is necessary to invest in high-quality multilingual evaluation benchmarks. In this dissertation, we focus on advancing QA techniques for handling end-user queries in multilingual environments. The dissertation consists of two parts. In the first part, we explore multilingual QA and a further dimension of multilingualism known as code-mixing, i.e., the blending of two or more languages within a single utterance. In the second part, we propose a technique for solving the task of multi-hop question generation by exploiting multiple documents. Experiments show that our models achieve state-of-the-art performance on answer extraction, ranking, and generation tasks across multiple domains of MQA, VQA, and language generation. The proposed techniques are generic and can be widely applied across domains and languages to advance QA systems.
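As a concrete illustration of the two research directions summarized above, the following minimal Python sketch shows what a code-mixed (Hindi-English, "Hinglish") QA instance and a multi-hop question spanning two documents might look like. The field names and examples are hypothetical illustrations chosen by the editor, not samples from the dissertation's datasets.

# Hypothetical illustrations only; field names and examples are not
# taken from the dissertation's actual datasets.

# 1) A code-mixed (Hinglish) QA instance: the question blends
#    romanized Hindi and English within a single utterance.
code_mixed_example = {
    "question": "Taj Mahal kis city mein located hai?",  # "In which city is the Taj Mahal located?"
    "context": "The Taj Mahal is an ivory-white marble mausoleum "
               "on the south bank of the Yamuna river in Agra, India.",
    "answer": "Agra",
}

# 2) A multi-hop question: answering it requires combining one fact
#    from each of two separate documents.
doc_a = "Kalpana Chawla was the first woman of Indian origin to fly in space."
doc_b = "Kalpana Chawla was born in Karnal, in the Indian state of Haryana."
multi_hop_question = ("In which Indian state was the first woman of "
                      "Indian origin to fly in space born?")
answer = "Haryana"  # hop 1: doc_a -> Kalpana Chawla; hop 2: doc_b -> Haryana

print(code_mixed_example["question"])
print(multi_hop_question, "->", answer)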