Paper Title
Informed Pre-Training on Prior Knowledge
Paper Authors
Paper Abstract
When training data is scarce, the incorporation of additional prior knowledge can assist the learning process. While it is common to initialize neural networks with weights that have been pre-trained on other large data sets, pre-training on more concise forms of knowledge has rather been overlooked. In this paper, we propose a novel informed machine learning approach and suggest pre-training on prior knowledge. Formal knowledge representations, e.g., graphs or equations, are first transformed into a small and condensed data set of knowledge prototypes. We show that informed pre-training on such knowledge prototypes (i) speeds up the learning process, (ii) improves generalization capabilities in the regime where not enough training data is available, and (iii) increases model robustness. Analyzing which parts of the model are affected most by the prototypes reveals that improvements come from deeper layers that typically represent high-level features. This confirms that informed pre-training can indeed transfer semantic knowledge. This is a novel effect, which shows that knowledge-based pre-training has additional and complementary strengths to existing approaches.
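The following is a minimal sketch of the two-stage idea described in the abstract, not the authors' implementation: a formal prior (here, a toy equation y = sin(x)) is condensed into a small prototype data set, a network is pre-trained on those prototypes, and is then fine-tuned on the scarce real data. All names, the network size, and the hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of informed pre-training on knowledge prototypes.
import torch
import torch.nn as nn

def make_knowledge_prototypes(n: int = 64):
    """Condense the prior knowledge (an equation) into a small prototype data set."""
    x = torch.linspace(-3.0, 3.0, n).unsqueeze(1)
    y = torch.sin(x)  # the assumed formal prior: y = sin(x)
    return x, y

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

# Stage 1: informed pre-training on the knowledge prototypes.
x_proto, y_proto = make_knowledge_prototypes()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss_fn(model(x_proto), y_proto).backward()
    opt.step()

# Stage 2: fine-tuning on the (scarce) real training data.
x_real = torch.rand(16, 1) * 6.0 - 3.0  # only 16 labelled samples
y_real = torch.sin(x_real) + 0.05 * torch.randn_like(x_real)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x_real), y_real).backward()
    opt.step()
```

The design choice mirrored here is that the prototype set is small and synthetic, generated directly from the knowledge representation rather than from additional observed data.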