Paper Title
Fast Rates for Contextual Linear Optimization
Paper Authors
Paper Abstract
Incorporating side observations in decision making can reduce uncertainty and boost performance, but it also requires that we tackle a potentially complex predictive relationship. While one may use off-the-shelf machine learning methods to separately learn a predictive model and plug it in, a variety of recent methods instead integrate estimation and optimization by fitting the model to directly optimize downstream decision performance. Surprisingly, in the case of contextual linear optimization, we show that the naive plug-in approach actually achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance. We show this by leveraging the fact that specific problem instances do not have arbitrarily bad near-dual-degeneracy. While there are other pros and cons to consider, as we discuss and illustrate numerically, our results highlight a nuanced landscape for the enterprise of integrating estimation and optimization. Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
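To make the naive plug-in approach concrete, here is a minimal sketch (illustrative only; the data, the least-squares predictor, and the simplex feasible set are assumptions, not the paper's experimental setup). Step 1 fits an off-the-shelf predictive model of the cost vector from the context; step 2 plugs the prediction into the downstream linear program and optimizes.

```python
# Sketch of the plug-in approach for contextual linear optimization.
# Step 1: learn f_hat(x) ~ E[c | x] with a standard regression.
# Step 2: for a new context x, decide z*(x) = argmin_{z in Z} f_hat(x)^T z.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical synthetic data: contexts X and observed cost vectors C.
n, p, d = 200, 3, 4                      # samples, context dim, decision dim
W_true = rng.normal(size=(p, d))
X = rng.normal(size=(n, p))
C = X @ W_true + 0.1 * rng.normal(size=(n, d))

# Step 1: plug-in predictor via ordinary least squares (one model per cost coordinate).
W_hat, *_ = np.linalg.lstsq(X, C, rcond=None)

# Step 2: decision for a new context; here Z is the simplex {z >= 0, sum(z) = 1}.
x_new = rng.normal(size=p)
c_hat = x_new @ W_hat
res = linprog(c_hat, A_eq=np.ones((1, d)), b_eq=[1.0], bounds=[(0, None)] * d)
z_star = res.x
```

Integrated ("end-to-end") methods would instead fit the model parameters to minimize the realized decision cost directly; the abstract's point is that the two-step recipe above already attains fast regret rates.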