Paper Title
Uncertainty Estimation for Language Reward Models
Paper Authors
Paper Abstract
Language models can learn a range of capabilities from unsupervised training on text corpora. However, to solve a particular problem (such as text summarization) it is typically necessary to fine-tune them on a task-specific dataset. It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons. However, collecting a large preference comparison dataset is still expensive -- and the learned reward models are unreliable out-of-distribution. We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning (RL). Specifically, we use bootstrap aggregating (bagging) to train an ensemble of reward models differing in the initialization of their final layer. Ensembles have proved successful in prior applications of active learning, but we find that in our setting ensemble active learning does not outperform random sampling. Further experiments show that while the aggregate predictions are well-calibrated, the ensemble's estimated epistemic uncertainty is only weakly correlated with model error. We suspect this is because the ensemble members are fine-tuned from a single model and so are similar to one another. This suggests current pre-training methods will need to be modified to support uncertainty estimation, e.g. by training multiple language models.
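Below is a minimal sketch, not the authors' code, of the approach the abstract describes: an ensemble of reward models that share one pre-trained language-model body but differ in the random initialization of their final (reward-head) layer, trained by bootstrap aggregating (bagging) on preference comparisons, with disagreement across heads used as an estimate of epistemic uncertainty. All names (`RewardEnsemble`, `backbone`, `hidden_dim`, `bagged_batches`, etc.) are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of a bagged reward-model ensemble for preference comparisons.
# Assumes `backbone` maps model inputs to a (batch, hidden_dim) summary vector.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RewardEnsemble(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_dim: int, n_heads: int = 5):
        super().__init__()
        self.backbone = backbone  # shared pre-trained language-model body
        # Ensemble members differ only in the initialization of their final layer.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, 1) for _ in range(n_heads))

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        h = self.backbone(inputs)  # (batch, hidden_dim) features
        return torch.cat([head(h) for head in self.heads], dim=-1)  # (batch, n_heads)

    def predict(self, inputs: torch.Tensor):
        rewards = self.forward(inputs)
        # Mean reward is the aggregate prediction; the standard deviation across
        # heads serves as the ensemble's estimate of epistemic uncertainty.
        return rewards.mean(dim=-1), rewards.std(dim=-1)


def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style loss: maximize P(chosen preferred) = sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()


def bagged_batches(chosen: torch.Tensor, rejected: torch.Tensor, n_heads: int):
    # Bootstrap aggregating: each head sees its own resample (with replacement)
    # of the same batch of preference comparisons.
    n = chosen.shape[0]
    for _ in range(n_heads):
        idx = torch.randint(0, n, (n,))
        yield chosen[idx], rejected[idx]


def train_step(model: RewardEnsemble, optimizer, chosen, rejected):
    # One bagging update: head k is trained only on its own bootstrap resample.
    optimizer.zero_grad()
    loss = torch.tensor(0.0)
    for k, (c, r) in enumerate(bagged_batches(chosen, rejected, len(model.heads))):
        r_c = model.forward(c)[:, k]
        r_r = model.forward(r)[:, k]
        loss = loss + preference_loss(r_c, r_r)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this sketch, an active-learning loop would score unlabeled comparison pairs with `predict` and query the ones with the highest ensemble disagreement; the abstract reports that, in the authors' setting, this did not outperform random sampling.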