Paper Title
Individual Calibration with Randomized Forecasting
Paper Authors
Paper Abstract
Machine learning applications often require calibrated predictions, e.g. a 90\% credible interval should contain the true outcome 90\% of the time. However, typical definitions of calibration only require this to hold on average, and offer no guarantees for predictions on individual samples. Thus, predictions can be systematically over- or under-confident on certain subgroups, leading to issues of fairness and potential vulnerabilities. We show that calibration for individual samples is possible in the regression setup if the predictions are randomized, i.e. outputting randomized credible intervals. Randomization removes systematic bias by trading bias for variance. We design a training objective to enforce individual calibration and use it to train randomized regression functions. The resulting models are better calibrated for arbitrarily chosen subgroups of the data, and can achieve higher utility in decision making against adversaries that exploit miscalibrated predictions.
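To make the randomized-forecasting idea concrete, below is a minimal sketch in PyTorch: a network takes the features together with a uniformly sampled quantile level eps and is trained to predict the eps-quantile of the target, after which the average coverage of the implied 90\% credible interval is checked. All names here are hypothetical, the data are synthetic, and the standard pinball loss is a stand-in for illustration, not the individual-calibration training objective proposed in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical forecaster: maps (x, eps) to a prediction of the
# eps-quantile of y given x. A fresh eps ~ U[0, 1] is drawn per query,
# which is what makes the resulting credible intervals randomized.
class RandomizedQuantileForecaster(nn.Module):
    def __init__(self, x_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, eps], dim=-1)).squeeze(-1)

def pinball_loss(y_pred: torch.Tensor, y: torch.Tensor,
                 eps: torch.Tensor) -> torch.Tensor:
    # Standard quantile (pinball) loss: minimized when y_pred is the
    # eps-quantile of y given the inputs. Stands in for the paper's
    # individual-calibration objective in this sketch.
    diff = y - y_pred
    eps = eps.squeeze(-1)
    return torch.mean(torch.maximum(eps * diff, (eps - 1.0) * diff))

# Synthetic heteroscedastic regression data (for illustration only).
torch.manual_seed(0)
x = torch.rand(1024, 1)
y = (2.0 * x + (0.1 + x) * torch.randn(1024, 1)).squeeze(-1)

model = RandomizedQuantileForecaster(x_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(500):
    eps = torch.rand(x.shape[0], 1)  # fresh randomization each step
    loss = pinball_loss(model(x, eps), y, eps)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Average-calibration check: a 90% credible interval built from the
# predicted 5% and 95% quantiles should cover y about 90% of the time.
with torch.no_grad():
    lo = model(x, torch.full((x.shape[0], 1), 0.05))
    hi = model(x, torch.full((x.shape[0], 1), 0.95))
    coverage = ((y >= lo) & (y <= hi)).float().mean().item()
print(f"empirical coverage of the 90% interval: {coverage:.3f}")
```

Note that passing this average-coverage check reflects only the weaker, on-average notion of calibration criticized in the abstract; the paper's training objective targets the stronger per-sample guarantee.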