Paper Title
Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context
Paper Authors
Paper Abstract
This technical report describes a new prototype architecture designed to integrate top-down and bottom-up analysis of non-standard linguistic input, where a semantic model of the context of an utterance is used to guide the analysis of the non-standard surface forms, including their automated normalization in context. While the architecture is generally applicable, as a concrete use case we target the generation of semantically informed target hypotheses for answers written by German learners in response to reading comprehension questions, where the reading context and possible target answers are given. The architecture integrates existing NLP components to produce candidate analyses on eight levels of linguistic modeling, all of which are broken down into atomic statements and connected into a large graphical model using Probabilistic Soft Logic (PSL) as a framework. Maximum a posteriori (MAP) inference on the resulting graphical model then assigns a belief distribution to candidate target hypotheses. The current version of the architecture builds on Universal Dependencies (UD) as its representation formalism on the form level and on Abstract Meaning Representations (AMRs) to represent semantic analyses of learner answers and the context information provided by the target answers. These general choices should make it comparatively straightforward to apply the architecture to other tasks and other languages.
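To make the PSL mechanism mentioned in the abstract concrete: PSL relaxes logical rules over [0, 1]-valued atoms using Łukasiewicz operators, and MAP inference minimizes the weighted "distance to satisfaction" of all grounded rules. The following is a minimal illustrative sketch of those semantics, not the real PSL library API; the predicate names and truth values are hypothetical, invented only to echo the paper's setting of linking noisy learner forms to target hypotheses.

```python
# Illustrative sketch of Lukasiewicz soft-logic semantics as used by PSL.
# This is NOT the PSL library API; predicates and values are hypothetical.

def l_and(a, b):
    # Lukasiewicz conjunction: max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    # Lukasiewicz disjunction: min(1, a + b)
    return min(1.0, a + b)

def l_not(a):
    # Lukasiewicz negation
    return 1.0 - a

def distance_to_satisfaction(body, head):
    # A rule body -> head is fully satisfied when head >= body;
    # otherwise the violation grows linearly with the gap.
    return max(0.0, body - head)

# Toy grounding: a learner's surface form partially matches a candidate
# analysis, and the discourse-context model supports that hypothesis.
form_matches = 0.8      # belief: form maps onto the candidate analysis
context_supports = 0.9  # belief: context model supports the hypothesis

body = l_and(form_matches, context_supports)  # = 0.7

# MAP inference chooses atom values minimizing the weighted sum of
# distances; with this single rule, any hypothesis belief >= 0.7
# incurs zero penalty, so inference pushes the belief up to that level.
for hypothesis in (0.4, 0.7, 0.9):
    print(hypothesis, distance_to_satisfaction(body, hypothesis))
```

In the full architecture, many such grounded rules connect atoms from all eight levels of linguistic modeling into one hinge-loss Markov random field, and the jointly optimal assignment yields the belief distribution over candidate target hypotheses.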