Paper Title

Function Contrastive Learning of Transferable Meta-Representations

Paper Authors

Muhammad Waleed Gondal, Shruti Joshi, Nasim Rahaman, Stefan Bauer, Manuel Wüthrich, Bernhard Schölkopf

Paper Abstract

Meta-learning algorithms adapt quickly to new tasks that are drawn from the same task distribution as the training tasks. The mechanism leading to fast adaptation is the conditioning of a downstream predictive model on the inferred representation of the task's underlying data generative process, or "function". This "meta-representation", which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model. In this work, we study the implications of this joint training on the transferability of the meta-representations. Our goal is to learn meta-representations that are robust to noise in the data and facilitate solving a wide range of downstream tasks that share the same underlying functions. To this end, we propose a decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function. In particular, our training scheme is driven by the self-supervision signal indicating whether two sets of examples stem from the same function. Our experiments on a number of synthetic and real-world datasets show that the representations we obtain outperform strong baselines in terms of downstream performance and noise robustness, even when these baselines are trained in an end-to-end manner.
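Based only on the abstract, the following is a minimal sketch of how such a function-contrastive objective might look: a permutation-invariant set encoder maps a set of (x, y) examples to a function embedding, and an InfoNCE-style loss treats two disjoint example sets drawn from the same function as a positive pair. All module names, dimensions, the mean-pooling encoder, and the temperature are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of a function-contrastive objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SetEncoder(nn.Module):
    """Encodes a set of (x, y) pairs into a single function embedding."""

    def __init__(self, x_dim: int, y_dim: int, hidden: int = 128, z_dim: int = 64):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, z_dim)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, x_dim), y: (batch, set_size, y_dim)
        h = self.point_net(torch.cat([x, y], dim=-1))  # per-point features
        h = h.mean(dim=1)                              # permutation-invariant pooling
        return F.normalize(self.head(h), dim=-1)       # unit-norm function embedding


def function_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: z_a[i] and z_b[i] encode two disjoint example sets of
    the same function i; all other pairs in the batch act as negatives."""
    logits = z_a @ z_b.t() / temperature               # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


# Usage sketch: each batch element holds two example sets from one function.
encoder = SetEncoder(x_dim=1, y_dim=1)
x_a, y_a = torch.randn(32, 10, 1), torch.randn(32, 10, 1)  # set A per function
x_b, y_b = torch.randn(32, 10, 1), torch.randn(32, 10, 1)  # set B, same functions
loss = function_contrastive_loss(encoder(x_a, y_a), encoder(x_b, y_b))
loss.backward()
```

The decoupling described in the abstract would correspond to training the encoder with this loss first (or separately), then conditioning any downstream predictive model on the frozen or fine-tuned function embedding.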
