Paper Title
MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization
Paper Authors
Paper Abstract
Model-Agnostic Meta-Learning (MAML) and its variants are popular few-shot classification methods. They train an initializer across a variety of sampled learning tasks (also known as episodes) so that the initialized model can adapt quickly to new tasks. However, current MAML-based algorithms have limitations in forming generalizable decision boundaries. In this paper, we propose an approach called MetaMix. It generates virtual feature-target pairs within each episode to regularize the backbone models. MetaMix can be integrated with any MAML-based algorithm and learns decision boundaries that generalize better to new tasks. Experiments on the mini-ImageNet, CUB, and FC100 datasets show that MetaMix improves the performance of MAML-based algorithms and achieves state-of-the-art results when integrated with Meta-Transfer Learning.
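Below is a minimal sketch of the mixup-style interpolation the abstract describes: virtual feature-target pairs are formed by convexly combining examples within a single episode and then used as additional training targets that regularize the backbone. The function name, tensor shapes, and the Beta(alpha, alpha) mixing coefficient are illustrative assumptions, not details taken from the paper.

```python
import torch

def make_virtual_pairs(features, targets_onehot, alpha=2.0):
    """Sketch: mixup-style virtual feature-target pairs within one episode.

    Assumes `features` are backbone activations of shape (N, D) and
    `targets_onehot` are one-hot labels of shape (N, C). The mixing ratio
    is drawn from Beta(alpha, alpha); alpha=2.0 is an assumed default.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()  # scalar mixing coefficient
    perm = torch.randperm(features.size(0))                # random pairing within the episode
    mixed_features = lam * features + (1 - lam) * features[perm]
    mixed_targets = lam * targets_onehot + (1 - lam) * targets_onehot[perm]
    return mixed_features, mixed_targets
```

In such a setup, the virtual pairs would typically be passed through the classifier head and included in the episode's loss alongside the original examples, encouraging smoother decision boundaries during adaptation.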