Paper Title
A Framework of Meta Functional Learning for Regularising Knowledge Transfer
Paper Authors
Paper Abstract
Machine learning classifiers' capability is largely dependent on the scale of available training data and is limited by model overfitting in data-scarce learning tasks. To address this problem, this work proposes a novel framework of Meta Functional Learning (MFL) that meta-learns a generalisable functional model from data-rich tasks whilst simultaneously regularising knowledge transfer to data-scarce tasks. MFL computes meta-knowledge on functional regularisation that generalises to different learning tasks, so that functional training on limited labelled data promotes the learning of more discriminative functions. Based on this framework, we formulate three variants of MFL: MFL with Prototypes (MFL-P), which learns a functional with auxiliary prototypes; Composite MFL (ComMFL), which transfers knowledge from both the functional space and the representational space; and MFL with Iterative Updates (MFL-IU), which improves knowledge transfer regularisation by progressively learning the functional regularisation during knowledge transfer. Moreover, we generalise these variants for knowledge transfer regularisation from binary classifiers to multi-class classifiers. Extensive experiments on two few-shot learning scenarios, Few-Shot Learning (FSL) and Cross-Domain Few-Shot Learning (CD-FSL), show that meta functional learning for knowledge transfer regularisation can improve FSL classifiers.
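The abstract does not specify the implementation, but the core idea of regularising a data-scarce classifier toward a meta-learned functional can be sketched in a minimal form. The sketch below is illustrative only: `train_classifier`, `w_meta`, and `lam` are assumed names, and a simple L2 pull toward meta-learned weights stands in for the paper's learned functional regulariser.

```python
import numpy as np

def train_classifier(X, y, w_meta=None, lam=5.0, lr=0.1, steps=200):
    """Logistic regression trained by gradient descent, with an optional
    functional-regularisation term pulling the learned function toward
    meta-learned weights w_meta (an MFL-style knowledge-transfer penalty)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (p - y) / len(y)         # cross-entropy gradient
        if w_meta is not None:
            grad += lam * (w - w_meta)        # regularise toward meta functional
        w -= lr * grad
    return w

# Hypothetical usage: a data-rich source task yields w_meta; a 5-sample
# target task is then trained with and without the regulariser.
rng = np.random.default_rng(0)
w_meta = np.array([2.0, -1.0])                # pretend meta-learned functional
X = rng.normal(size=(5, 2))                   # data-scarce task: 5 samples
y = (X @ w_meta > 0).astype(float)
w_reg = train_classifier(X, y, w_meta=w_meta) # regularised few-shot classifier
w_free = train_classifier(X, y)               # unregularised baseline
```

With only a handful of labelled samples, the unregularised fit is free to overfit, whereas the penalty keeps the learned function near the transferred one; the paper's MFL-IU variant would additionally update the regulariser itself iteratively.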