Paper title
Optimal probabilistic forecasts: When do they work?
Paper authors
Paper abstract
Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we re-investigate the practice of using proper scoring rules to produce probabilistic forecasts that are 'optimal' according to a given score, and assess when their out-of-sample accuracy is superior to that of alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model, and the scoring rule. Notably, we show that this approach to forecasting reaps benefits only when the predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward. Subject to this compatibility, however, the greater the degree of misspecification, the greater the superiority of the optimal forecast. We explore these issues under a range of different scenarios, using both artificially simulated and empirical data.
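To make the score-optimization paradigm described in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's code or models): a deliberately misspecified Gaussian predictive model is fit to skewed (gamma-distributed) data twice, once by optimizing the in-sample average log score and once by optimizing the in-sample average CRPS, and each resulting 'optimal' forecast is then evaluated out of sample under the CRPS. The gamma data-generating process, sample sizes, and parameter values are illustrative assumptions.

```python
# Hypothetical illustration of score-based 'optimal' forecasting under
# model misspecification, assuming a gamma DGP and a Gaussian predictive.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# True DGP: skewed gamma data, so the Gaussian predictive is misspecified.
train = rng.gamma(shape=2.0, scale=1.5, size=2000)
test = rng.gamma(shape=2.0, scale=1.5, size=2000)

def neg_log_score(params, y):
    """Negative average log score of the Gaussian predictive N(mu, sigma^2)."""
    mu, log_sigma = params
    return -np.mean(stats.norm.logpdf(y, loc=mu, scale=np.exp(log_sigma)))

def avg_crps(params, y):
    """Average CRPS of the Gaussian predictive (closed form; lower is better)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * stats.norm.cdf(z) - 1)
                    + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))
    return np.mean(crps)

# Produce two 'optimal' forecasts, one per scoring rule, by minimizing the
# chosen score criterion in-sample.
x0 = np.array([np.mean(train), np.log(np.std(train))])
fit_log = minimize(neg_log_score, x0, args=(train,))
fit_crps = minimize(avg_crps, x0, args=(train,))

# Evaluate both out of sample under the CRPS: the forecast optimized under
# the evaluation score should (weakly) dominate under misspecification.
print("out-of-sample CRPS, log-score-optimal:", avg_crps(fit_log.x, test))
print("out-of-sample CRPS, CRPS-optimal:     ", avg_crps(fit_crps.x, test))
```

Under a correctly specified model, both criteria would recover (asymptotically) the same predictive distribution; it is only under misspecification that the choice of score used for estimation can matter for out-of-sample performance, which is the interplay the abstract highlights.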