Title
Data InStance Prior (DISP) in Generative Adversarial Networks
Authors
Abstract
Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generating high-quality images. However, this gain in performance depends on the availability of a large amount of training data. In limited data regimes, training typically diverges, and therefore the generated samples are of low quality and lack diversity. Previous works have addressed training in low-data settings by leveraging transfer learning and data augmentation techniques. We propose a novel transfer learning method for GANs in the limited-data domain by leveraging an informative data prior derived from self-supervised/supervised pre-trained networks trained on a diverse source domain. We perform experiments on several standard vision datasets using various GAN architectures (BigGAN, SNGAN, StyleGAN2) to demonstrate that the proposed method effectively transfers knowledge to domains with few target images, outperforming existing state-of-the-art techniques in terms of image quality and diversity. We also show the utility of the data instance prior in large-scale unconditional image generation.