Paper Title
Improving Self-supervised Learning for Out-of-distribution Task via Auxiliary Classifier
Paper Authors
Abstract
In real-world scenarios, out-of-distribution (OOD) datasets may exhibit a large distributional shift from the training dataset. This phenomenon commonly occurs when a trained classifier is deployed in varying dynamic environments, causing a significant drop in performance. To tackle this issue, we propose an end-to-end deep multi-task network in this work. Observing a strong relationship between rotation prediction (self-supervised) accuracy and semantic classification accuracy on OOD tasks, we introduce an additional auxiliary classification head into our multi-task network, alongside the semantic classification and rotation prediction heads. To capture the influence of this auxiliary classifier on improving the rotation prediction head, our proposed learning method is framed as a bi-level optimisation problem, in which the upper level updates the parameters of the semantic classification and rotation prediction heads. In the lower-level optimisation, only the auxiliary classification head is updated, supervised by the semantic classification head whose parameters are kept fixed. The proposed method has been validated on three unseen OOD datasets, where it exhibits a clear improvement in semantic classification accuracy over the two baseline methods. Our code is available on GitHub: \url{https://github.com/harshita-555/OSSL}
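The architecture and two-level update described in the abstract can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the backbone, layer sizes, optimisers, learning rates, and the use of the fixed semantic head's predictions as targets for the auxiliary head are all assumptions for illustration.

```python
# Sketch: shared backbone with semantic, rotation, and auxiliary heads,
# trained with a two-level update (upper: semantic + rotation heads;
# lower: auxiliary head only, with the semantic head frozen).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        # Toy backbone for 3x32x32 inputs; the real model would use a CNN.
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.semantic_head = nn.Linear(feat_dim, num_classes)   # semantic classification
        self.rotation_head = nn.Linear(feat_dim, 4)             # 0/90/180/270 degrees
        self.auxiliary_head = nn.Linear(feat_dim, num_classes)  # auxiliary classifier

    def forward(self, x):
        z = self.backbone(x)
        return self.semantic_head(z), self.rotation_head(z), self.auxiliary_head(z)

net = MultiTaskNet()
ce = nn.CrossEntropyLoss()

# Upper level: update backbone + semantic and rotation prediction heads.
upper_opt = torch.optim.SGD(
    list(net.backbone.parameters())
    + list(net.semantic_head.parameters())
    + list(net.rotation_head.parameters()), lr=0.01)
# Lower level: update only the auxiliary classification head.
lower_opt = torch.optim.SGD(net.auxiliary_head.parameters(), lr=0.01)

x = torch.randn(8, 3, 32, 32)        # dummy image batch
y_sem = torch.randint(0, 10, (8,))   # dummy semantic labels
y_rot = torch.randint(0, 4, (8,))    # dummy rotation labels

# Upper-level step on the semantic + rotation losses.
sem_logits, rot_logits, _ = net(x)
upper_loss = ce(sem_logits, y_sem) + ce(rot_logits, y_rot)
upper_opt.zero_grad()
upper_loss.backward()
upper_opt.step()

# Lower-level step: the auxiliary head is supervised through the
# semantic head, whose parameters are held fixed (no_grad).
with torch.no_grad():
    z = net.backbone(x)
    pseudo = net.semantic_head(z).argmax(dim=1)
aux_logits = net.auxiliary_head(z)
lower_opt.zero_grad()
ce(aux_logits, pseudo).backward()
lower_opt.step()
```

Because the lower-level optimiser holds only the auxiliary head's parameters and the semantic head's output is computed under `no_grad`, the lower step cannot change the backbone or the semantic head, matching the abstract's description of fixing the semantic classification head.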