Paper Title


A Note on Task-Aware Loss via Reweighing Prediction Loss by Decision-Regret

Authors

Connor Lawless, Angela Zhou

Abstract


In this short technical note we propose a baseline for decision-aware learning for contextual linear optimization, which solves stochastic linear optimization when cost coefficients can be predicted based on context information. We propose a decision-aware version of predict-then-optimize. We reweigh the prediction error by the decision regret incurred by an (unweighted) pilot estimator of costs to obtain a decision-aware predictor, then optimize with cost predictions from the decision-aware predictor. This method can be motivated as a finite-difference, iterate-independent approximation of the gradients of previously proposed end-to-end learning algorithms; it is also consistent with previously suggested intuition for end-to-end learning. This baseline is computationally easy to implement with readily available reweighted prediction oracles and linear optimization, and can be implemented with convex optimization so long as the prediction error minimization is convex. Empirically, we demonstrate that this approach can lead to improvements over a "predict-then-optimize" framework for settings with misspecified models, and is competitive with other end-to-end approaches. Therefore, due to its simplicity and ease of use, we suggest it as a simple baseline for end-to-end and decision-aware learning.
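The pipeline described in the abstract — fit an unweighted pilot cost predictor, compute each sample's decision regret under the pilot's predictions, then refit the predictor with regret-based sample weights before the final predict-then-optimize step — can be sketched as follows. This is a minimal illustration with assumed toy ingredients (least-squares prediction, a one-hot "pick the cheapest item" linear optimization oracle, and a small additive constant on the weights), not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear optimization oracle: minimize c @ w over one-hot decisions,
# i.e. pick the single cheapest of d items.
def solve(c):
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

# Synthetic contextual data: contexts X, true cost vectors C.
n, p, d = 500, 3, 4
X = rng.normal(size=(n, p))
B = rng.normal(size=(p, d))
C = X @ B + 0.1 * rng.normal(size=(n, d))

# Step 1: unweighted (pilot) least-squares cost predictor.
theta_pilot, *_ = np.linalg.lstsq(X, C, rcond=None)
C_hat = X @ theta_pilot

# Step 2: per-sample decision regret incurred by the pilot predictions:
# true cost of the decision induced by C_hat minus the true optimal cost.
regret = np.array([
    C[i] @ solve(C_hat[i]) - C[i] @ solve(C[i]) for i in range(n)
])

# Step 3: reweigh the prediction loss by the regret (plus a small
# constant so zero-regret samples keep some weight, an assumption here)
# via row-rescaled weighted least squares -> decision-aware predictor.
wts = np.sqrt(regret + 1e-3)
theta_aware, *_ = np.linalg.lstsq(X * wts[:, None], C * wts[:, None], rcond=None)

# Step 4: predict-then-optimize with the decision-aware predictions.
decisions = [solve(x @ theta_aware) for x in X]
```

Any off-the-shelf prediction method that accepts sample weights can stand in for the weighted least squares in step 3, which is what makes this baseline easy to implement with readily available oracles.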
