Paper Title
Transfer Learning with Pretrained Remote Sensing Transformers
Paper Authors
Paper Abstract
Although the remote sensing (RS) community has begun to pretrain transformers (intended to be fine-tuned on RS tasks), it is unclear how these models perform under distribution shifts. Here, we pretrain a new RS transformer, called SatViT-V2, on 1.3 million satellite-derived RS images, then fine-tune it (along with five other models) to investigate how it performs on distributions not seen during training. We split an expertly labeled land cover dataset into 14 datasets based on source biome. We train each model on each biome separately and test them on all other biomes. In all, this amounts to 1,638 biome transfer experiments. After fine-tuning, we find that SatViT-V2 outperforms SatViT-V1 by 3.1% on in-distribution (matching biomes) and 2.8% on out-of-distribution (mismatching biomes) data. Additionally, we find that initializing fine-tuning from the linear-probed solution (i.e., leveraging LPFT [1]) improves SatViT-V2's performance by another 1.2% on in-distribution and 2.4% on out-of-distribution data. Next, we find that pretrained RS transformers are better calibrated under distribution shifts than non-pretrained models, and that leveraging LPFT yields further improvements in model calibration. Lastly, we find that five measures of distribution shift are moderately correlated with biome transfer performance. We share code and pretrained model weights (https://github.com/antofuller/SatViT).
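For readers unfamiliar with LPFT, the sketch below illustrates the two-stage procedure the abstract refers to: first train only a linear head on frozen encoder features (linear probing), then fine-tune the full model starting from that probed head instead of a randomly initialized one. This is a minimal illustration only; names such as `encoder`, `train_loader`, `feat_dim`, and the hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal sketch of LPFT (linear probing, then fine-tuning) on a pretrained encoder.
# Assumes `encoder(x)` returns pooled features of size `feat_dim`; all names are illustrative.
import torch
import torch.nn as nn

def lpft(encoder, train_loader, num_classes, feat_dim,
         probe_epochs=5, ft_epochs=10, device="cuda"):
    encoder = encoder.to(device)
    head = nn.Linear(feat_dim, num_classes).to(device)
    criterion = nn.CrossEntropyLoss()

    # Stage 1: linear probing -- freeze the encoder, train only the linear head.
    for p in encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
    for _ in range(probe_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = encoder(x)
            loss = criterion(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: fine-tuning -- unfreeze the encoder and continue training,
    # starting from the linearly probed head rather than a fresh one.
    for p in encoder.parameters():
        p.requires_grad = True
    opt = torch.optim.AdamW(list(encoder.parameters()) + list(head.parameters()), lr=1e-5)
    for _ in range(ft_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = criterion(head(encoder(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()

    return encoder, head
```

The design intuition behind LPFT [1] is that starting fine-tuning from a reasonable head avoids large early gradients that would otherwise distort the pretrained features, which is consistent with the abstract's finding that LPFT improves out-of-distribution accuracy and calibration.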