Paper Title

Sequential Learning for Domain Generalization

Paper Authors

Da Li, Yongxin Yang, Yi-Zhe Song, Timothy Hospedales

Paper Abstract

In this paper we propose a sequential learning framework for Domain Generalization (DG), the problem of training a model that is robust to domain shift by design. Various DG approaches have been proposed with different motivating intuitions, but they typically optimize for a single step of domain generalization -- training on one set of domains and generalizing to one other. Our sequential learning is inspired by the idea of lifelong learning, where accumulated experience means that learning the $n^{th}$ thing becomes easier than the $1^{st}$ thing. In DG this means encountering a sequence of domains and at each step training to maximise performance on the next domain. The performance at domain $n$ then depends on the previous $n-1$ learning problems. Thus backpropagating through the sequence means optimizing performance not just for the next domain, but for all following domains. Training on all such sequences of domains provides dramatically more `practice' for a base DG learner compared to existing approaches, thus improving performance on a true testing domain. This strategy can be instantiated for different base DG algorithms, but we focus on its application to the recently proposed Meta-Learning Domain Generalization (MLDG). We show that for MLDG it leads to a simple-to-implement and fast algorithm that provides consistent performance improvement on a variety of DG benchmarks.
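To make the sequential idea concrete, here is a minimal first-order sketch (not the paper's exact algorithm): walk a sequence of training domains, adapt on domain $i$, then accumulate the loss and gradient measured on domain $i+1$ with the adapted weights. The linear model, squared loss, learning rates, and the first-order approximation (ignoring second-order terms through the inner updates, as in first-order MAML-style methods) are all illustrative assumptions.

```python
import numpy as np

def loss_and_grad(w, X, y):
    # Squared-error loss and gradient for a linear model y_hat = X @ w.
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

def sequential_dg_step(w, domains, inner_lr=0.1):
    """One meta-step over a domain sequence (first-order sketch).

    domains: list of (X, y) pairs, one per domain, in sequence order.
    At each step we adapt on domain i, then evaluate the adapted
    weights on domain i+1, summing those 'generalization' losses and
    their gradients over the whole sequence.
    """
    w_seq = w.copy()
    meta_grad = np.zeros_like(w)
    meta_loss = 0.0
    for (Xi, yi), (Xj, yj) in zip(domains[:-1], domains[1:]):
        _, g_train = loss_and_grad(w_seq, Xi, yi)   # adapt on current domain
        w_seq = w_seq - inner_lr * g_train
        lj, g_meta = loss_and_grad(w_seq, Xj, yj)   # generalize to next domain
        meta_grad += g_meta   # first-order: treat d(w_seq)/dw as identity
        meta_loss += lj
    return meta_grad, meta_loss
```

In the full method one would backpropagate through the inner updates and average over many sampled domain orderings; this sketch only shows the structure of one sequence pass, with the meta-gradient applied to `w` in an outer loop.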
