Paper Title

On the Transfer of Disentangled Representations in Realistic Settings

Paper Authors

Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf

Abstract

Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning. While disentangled representations were found to be useful for diverse tasks such as abstract reasoning and fair classification, their scalability and real-world impact remain questionable. We introduce a new high-resolution dataset with 1M simulated images and over 1,800 annotated real-world images of the same setup. In contrast to previous work, this new dataset exhibits correlations, a complex underlying structure, and allows to evaluate transfer to unseen simulated and real-world settings where the encoder i) remains in distribution or ii) is out of distribution. We propose new architectures in order to scale disentangled representation learning to realistic high-resolution settings and conduct a large-scale empirical study of disentangled representations on this dataset. We observe that disentanglement is a good predictor for out-of-distribution (OOD) task performance.
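The out-of-distribution transfer evaluation described in the abstract can be pictured as a simple downstream protocol: freeze the encoder, fit a lightweight regressor on its representations of in-distribution images, and measure prediction error on splits whose factor values were never seen during training. The sketch below is a minimal illustration of that idea only; `encoder`, the data arrays, and the choice of regressor are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of an OOD transfer evaluation on frozen representations.
# Assumptions (not from the paper): `encoder` maps images to latent codes,
# y_* are ground-truth factors of variation, (x_id, y_id) is a held-out
# in-distribution split, and (x_ood, y_ood) contains factor values unseen
# during downstream training.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor


def evaluate_transfer(encoder, x_train, y_train, x_id, y_id, x_ood, y_ood):
    """Fit a downstream regressor on frozen representations and compare
    in-distribution vs. out-of-distribution prediction error."""
    z_train = encoder(x_train)  # the encoder itself is never fine-tuned

    downstream = MultiOutputRegressor(GradientBoostingRegressor())
    downstream.fit(z_train, y_train)  # predict factors from latent codes

    err_id = np.abs(downstream.predict(encoder(x_id)) - y_id).mean()
    err_ood = np.abs(downstream.predict(encoder(x_ood)) - y_ood).mean()
    return err_id, err_ood
```

Comparing `err_ood` across encoders with different disentanglement scores is the kind of analysis that would probe the abstract's claim that disentanglement is a good predictor of OOD task performance.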
