Paper Title
SKILL: Structured Knowledge Infusion for Large Language Models
Paper Authors
Paper Abstract
Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, it is largely unexplored whether they internalize knowledge better from structured data, such as a knowledge graph, or from text. In this work, we propose a method to infuse structured knowledge into LLMs by directly training T5 models on factual triples of knowledge graphs (KGs). We show that models pre-trained on the Wikidata KG with our method outperform the T5 baselines on FreebaseQA and WikiHop, as well as on the Wikidata-answerable subsets of TriviaQA and NaturalQuestions. Models pre-trained on factual triples compare competitively with models pre-trained on natural language sentences that contain the same knowledge. When trained on a smaller KG, WikiMovies, our model achieves a 3x improvement in exact-match score on the MetaQA task compared to the T5 baseline. The proposed method has the advantage that no alignment between the knowledge graph and a text corpus is required when curating training data. This makes our method particularly useful when working with industry-scale knowledge graphs.
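To make the idea of training a seq2seq model directly on factual triples concrete, the sketch below serializes (subject, relation, object) triples into (source, target) text pairs, where the model is asked to predict the object given the subject and relation. The prompt template and helper names are illustrative assumptions, not the paper's verbatim format.

```python
def triple_to_example(subject: str, relation: str, obj: str) -> tuple[str, str]:
    """Serialize one KG triple into a (source, target) pair for
    seq2seq pretraining. The "predict object:" prefix is an assumed
    template, not necessarily the one used in the paper."""
    source = f"predict object: {subject} [SEP] {relation}"
    target = obj
    return source, target


def build_training_pairs(triples: list[tuple[str, str, str]]) -> list[tuple[str, str]]:
    """Convert a list of KG triples into text pairs. Note that no
    alignment with a text corpus is needed: the triples alone
    define the training data."""
    return [triple_to_example(s, r, o) for (s, r, o) in triples]


# Example with Wikidata-style facts (contents are illustrative)
triples = [
    ("Marie Curie", "award received", "Nobel Prize in Physics"),
    ("Paris", "country", "France"),
]
pairs = build_training_pairs(triples)
```

Each resulting pair can then be fed to a standard encoder-decoder training loop (e.g. T5-style span or full-target prediction); the key point from the abstract is that the inputs come straight from the KG, with no KG-to-text alignment step.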