Paper Title
PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion
Paper Authors
Paper Abstract
This paper presents a parameter-lite transfer learning approach for pretrained language models (LMs) on knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we tune only a few new parameters while keeping the original LM parameters fixed. We establish this by reformulating KG completion as a "fill-in-the-blank" task and introducing a parameter-lite encoder on top of the original LMs. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and are competitive with prior state-of-the-art approaches. For instance, we outperform full-finetuning approaches on a KG completion benchmark by tuning only 1% of the parameters. The code and datasets are available at \url{https://github.com/yuanyehome/PALT}.
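The abstract names two ingredients: a cloze-style ("fill-in-the-blank") reformulation of KG completion, and a small trainable encoder added on top of a frozen pretrained LM. Below is a minimal sketch of that training pattern in PyTorch/Transformers; the `LiteEncoder` class, its bottleneck-adapter design, and the example triple are illustrative assumptions, not the paper's actual parameter-lite encoder or prompt construction.

```python
# Hypothetical sketch (not the authors' exact architecture) of the two ideas in
# the abstract: (1) casting a KG triple as a "fill-in-the-blank" sentence and
# (2) tuning only a small new module while every original LM parameter stays frozen.
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
lm = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Freeze all original LM parameters; only the new encoder below would be trained.
for p in lm.parameters():
    p.requires_grad = False

class LiteEncoder(nn.Module):
    """Small trainable residual adapter over the frozen LM's hidden states
    (a hypothetical stand-in for the paper's parameter-lite encoder)."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

lite = LiteEncoder()

# KG completion as fill-in-the-blank: predict the tail entity of the
# (Einstein, born_in, ?) triple through the LM's masked-token head.
text = f"Einstein was born in {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
hidden = lm.bert(**inputs).last_hidden_state   # frozen LM features
logits = lm.cls(lite(hidden))                  # only `lite` carries trainable weights

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode([logits[0, mask_pos].argmax().item()]))

# The "1% of parameters" flavor of the abstract: compare tuned vs. total counts.
n_tuned = sum(p.numel() for p in lite.parameters())
n_total = sum(p.numel() for p in lm.parameters())
print(f"tuning {n_tuned / n_total:.2%} of the LM's parameter count")
```

Because the LM's weights never change, only the adapter's few thousand parameters need gradients and optimizer state, which is what makes this style of transfer "parameter-lite" relative to full finetuning.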