Paper Title

Is Support Set Diversity Necessary for Meta-Learning?

Paper Authors

Amrith Setlur, Oscar Li, Virginia Smith

Paper Abstract

Meta-learning is a popular framework for learning with limited data in which an algorithm is produced by training over multiple few-shot learning tasks. For classification problems, these tasks are typically constructed by sampling a small number of support and query examples from a subset of the classes. While conventional wisdom is that task diversity should improve the performance of meta-learning, in this work we find evidence to the contrary: we propose a modification to traditional meta-learning approaches in which we keep the support sets fixed across tasks, thus reducing task diversity. Surprisingly, we find that not only does this modification not result in adverse effects, it almost always improves the performance for a variety of datasets and meta-learning methods. We also provide several initial analyses to understand this phenomenon. Our work serves to: (i) more closely investigate the effect of support set construction for the problem of meta-learning, and (ii) suggest a simple, general, and competitive baseline for few-shot learning.
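
As a concrete illustration of the modification the abstract describes, the sketch below contrasts standard episodic sampling with a fixed-support variant in which the support set is drawn once and reused across all meta-training tasks. All function names and the `data_by_class` layout are hypothetical assumptions for illustration, not the authors' code.

```python
import random

# A minimal sketch (hypothetical names and data layout, not the authors'
# code) contrasting standard episodic sampling with the fixed-support
# variant described above: the support set is sampled once and reused
# across all meta-training tasks, while query examples are resampled.
# `data_by_class` maps each class name to a list of its examples.

def sample_standard_episode(data_by_class, n_way, k_shot, n_query):
    """Standard episode: fresh classes, support, and queries every task."""
    classes = random.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(data_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

def make_fixed_support_sampler(data_by_class, n_way, k_shot, n_query):
    """Reduced-diversity variant: one support set shared by every task."""
    classes = random.sample(sorted(data_by_class), n_way)
    support, support_ids = [], {}
    for label, cls in enumerate(classes):
        chosen = random.sample(range(len(data_by_class[cls])), k_shot)
        support_ids[cls] = set(chosen)
        support += [(data_by_class[cls][i], label) for i in chosen]

    def sample_episode():
        # Only the query set varies across tasks; the support stays fixed.
        query = []
        for label, cls in enumerate(classes):
            pool = [i for i in range(len(data_by_class[cls]))
                    if i not in support_ids[cls]]
            query += [(data_by_class[cls][i], label)
                      for i in random.sample(pool, n_query)]
        return support, query

    return sample_episode

# Usage: build the sampler once before training, then call it each step.
# sampler = make_fixed_support_sampler(data, n_way=5, k_shot=1, n_query=15)
# support, query = sampler()
```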
