Paper Title
Task Residual for Tuning Vision-Language Models
Paper Authors
Paper Abstract
Large-scale vision-language models (VLMs) pre-trained on billion-level data have learned general visual representations and broad visual concepts. In principle, the well-learned knowledge structure of the VLMs should be inherited appropriately when being transferred to downstream tasks with limited data. However, most existing efficient transfer learning (ETL) approaches for VLMs either damage or are excessively biased towards the prior knowledge, e.g., prompt tuning (PT) discards the pre-trained text-based classifier and builds a new one while adapter-style tuning (AT) fully relies on the pre-trained features. To address this, we propose a new efficient tuning approach for VLMs named Task Residual Tuning (TaskRes), which performs directly on the text-based classifier and explicitly decouples the prior knowledge of the pre-trained models and new knowledge regarding a target task. Specifically, TaskRes keeps the original classifier weights from the VLMs frozen and obtains a new classifier for the target task by tuning a set of prior-independent parameters as a residual to the original one, which enables reliable prior knowledge preservation and flexible task-specific knowledge exploration. The proposed TaskRes is simple yet effective, which significantly outperforms previous ETL methods (e.g., PT and AT) on 11 benchmark datasets while requiring minimal effort for the implementation. Our code is available at https://github.com/geekyutao/TaskRes.
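To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea, not the official implementation (see the linked repo for that): the pre-trained text-based classifier weights are kept frozen, and a zero-initialized residual of the same shape is the only tuned parameter set. The class name `TaskRes`, the argument `base_text_weights`, and the scaling factor `alpha` are illustrative names assumed here, following the abstract's description of tuning a prior-independent residual on top of the frozen classifier.

```python
import torch
import torch.nn as nn

class TaskRes(nn.Module):
    """Sketch of Task Residual Tuning: the frozen pre-trained text-based
    classifier preserves prior knowledge, while a learnable residual of the
    same shape explores task-specific knowledge."""

    def __init__(self, base_text_weights: torch.Tensor, alpha: float = 0.5):
        super().__init__()
        # Frozen prior knowledge: text embeddings of the class names from the
        # VLM's text encoder, shape (num_classes, embed_dim). A buffer, so it
        # receives no gradients.
        self.register_buffer("base", base_text_weights)
        # Prior-independent, task-specific residual; zero-initialized so that
        # tuning starts exactly from the pre-trained zero-shot classifier.
        self.residual = nn.Parameter(torch.zeros_like(base_text_weights))
        self.alpha = alpha  # scaling factor for the residual (assumed name)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # New classifier = frozen base + scaled learnable residual.
        classifier = self.base + self.alpha * self.residual
        # L2-normalize both sides, CLIP-style, so logits are cosine similarities.
        classifier = classifier / classifier.norm(dim=-1, keepdim=True)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        return image_features @ classifier.t()
```

In this sketch only `self.residual` is trainable, so before any tuning the model reproduces the VLM's zero-shot predictions, and training can only move away from that prior as far as the downstream data warrants.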