Paper Title
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Paper Authors
Paper Abstract
Dataset Distillation is a newly emerging area that aims to distill large datasets into much smaller and highly informative synthetic ones to accelerate training and reduce storage. Among various dataset distillation methods, trajectory-matching-based methods (MTT) have achieved SOTA performance on many tasks, e.g., on CIFAR-10/100. However, due to the exorbitant memory consumption of unrolling optimization through SGD steps, MTT fails to scale to large-scale datasets such as ImageNet-1K. Can we scale this SOTA method to ImageNet-1K, and does its effectiveness on CIFAR transfer to ImageNet-1K? To answer these questions, we first propose a procedure to exactly compute the unrolled gradient with constant memory complexity, which allows us to scale MTT to ImageNet-1K seamlessly with a ~6x reduction in memory footprint. We further discover that it is challenging for MTT to handle datasets with a large number of classes, and propose a novel soft-label assignment that drastically improves its convergence. The resulting algorithm sets a new SOTA on ImageNet-1K: we can scale up to 50 IPCs (images per class) on ImageNet-1K on a single GPU (all previous methods could only scale to 2 IPCs on ImageNet-1K), leading to the best accuracy (only a 5.9% accuracy drop against full-dataset training) while utilizing only 4.2% of the number of data points - an 18.2% absolute gain over the prior SOTA. Our code is available at https://github.com/justincui03/tesla.
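To make the memory argument concrete, below is a minimal PyTorch-style sketch of the constant-memory pattern the abstract describes: rolling out the student trajectory twice, keeping only one step's computation graph alive at a time, and accumulating per-step gradient contributions into the synthetic data. This is an illustration under stated assumptions, not the authors' implementation (see the linked repository for that): `model_fn`, `expert_end`, and the other names are hypothetical, the sketch keeps only first-order terms and omits the loss normalization, whereas the paper derives an exact gradient, and the soft-label helper at the end is just one plausible reading of the soft-label idea.

```python
# Hedged sketch, not the paper's code. Assumptions: model_fn(theta, x) is a
# functional forward pass taking flat parameters theta; expert_end is the
# target expert checkpoint theta*; syn_images/syn_labels are the synthetic data.
import torch
import torch.nn.functional as F

def constant_memory_grad(theta0, syn_images, syn_labels, expert_end,
                         lr, num_steps, model_fn):
    # Pass 1: roll out N student SGD steps, freeing each step's graph
    # immediately, to obtain the final student parameters theta_N.
    theta = theta0.detach().clone()
    for _ in range(num_steps):
        theta = theta.requires_grad_(True)
        loss = F.cross_entropy(model_fn(theta, syn_images), syn_labels)
        g = torch.autograd.grad(loss, theta)[0]
        theta = (theta - lr * g).detach()
    residual = theta - expert_end  # (theta_N - theta*), held constant below

    # Pass 2: replay the trajectory; at each step, backprop only that step's
    # contribution to ||theta_N - theta*||^2 into the synthetic images, then
    # drop the graph. Peak memory stays constant in num_steps, unlike naive
    # unrolling, which retains all N graphs at once.
    theta = theta0.detach().clone()
    grad_images = torch.zeros_like(syn_images)
    for _ in range(num_steps):
        theta = theta.requires_grad_(True)
        x = syn_images.detach().requires_grad_(True)
        loss = F.cross_entropy(model_fn(theta, x), syn_labels)
        g = torch.autograd.grad(loss, theta, create_graph=True)[0]
        # theta_N depends on x through -lr * g at this step; first-order
        # contribution: d/dx of -2 * lr * <g, theta_N - theta*>.
        step_obj = -2 * lr * torch.dot(g.flatten(), residual.flatten())
        grad_images += torch.autograd.grad(step_obj, x)[0]
        theta = (theta - lr * g).detach()
    return grad_images

# One plausible form of the soft-label idea from the abstract (hypothetical,
# not necessarily the paper's exact rule): label each synthetic image with an
# expert checkpoint's temperature-scaled prediction rather than a one-hot
# class, which matters when there are 1,000 classes.
def assign_soft_labels(expert_model, syn_images, temperature=1.0):
    with torch.no_grad():
        return F.softmax(expert_model(syn_images) / temperature, dim=1)
```

With soft labels, `F.cross_entropy` accepts class-probability targets directly (PyTorch >= 1.10), so the same two-pass loop applies unchanged.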