Paper Title

Stochastic Optimization Forests

Authors

Nathan Kallus, Xiaojie Mao

Abstract

We study contextual stochastic optimization problems, where we leverage rich auxiliary observations (e.g., product characteristics) to improve decision making with uncertain variables (e.g., demand). We show how to train forest decision policies for this problem by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm. We realize this seemingly computationally intractable problem by developing approximate splitting criteria that utilize optimization perturbation analysis to eschew burdensome re-optimization for every candidate split, so that our method scales to large-scale problems. We prove that our splitting criteria consistently approximate the true risk and that our method achieves asymptotic optimality. We extensively validate our method empirically, demonstrating the value of optimization-aware construction of forests and the success of our efficient approximations. We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
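The abstract contrasts splitting to improve prediction accuracy with splitting to improve downstream decision quality. The toy sketch below (hypothetical, not the authors' code) illustrates the latter idea for a one-dimensional newsvendor problem: each candidate split is scored by re-solving the decision problem in each child and summing the realized decision costs. Note this does the exact per-split re-optimization that the paper's approximate criteria are designed to avoid; it only shows what "optimization-aware" splitting means.

```python
# Hypothetical sketch: optimization-aware splitting for a newsvendor problem.
# A split is scored by downstream decision cost, not by prediction error.
import numpy as np

def newsvendor_cost(order, demand, under=4.0, over=1.0):
    # Cost of ordering `order` units against realized demands:
    # underage penalty for lost sales plus overage penalty for leftovers.
    return under * np.maximum(demand - order, 0) + over * np.maximum(order - demand, 0)

def child_cost(demand, under=4.0, over=1.0):
    # The optimal newsvendor order is the under/(under+over) quantile of
    # the demand sample; return the total cost of that decision in-sample.
    order = np.quantile(demand, under / (under + over))
    return newsvendor_cost(order, demand, under, over).sum()

def best_split(x, demand):
    # Score each candidate threshold on feature x by the total downstream
    # decision cost of the two children; keep the cheapest option.
    best = (None, child_cost(demand))  # (threshold, cost); start unsplit
    for t in np.unique(x)[:-1]:
        left, right = demand[x <= t], demand[x > t]
        cost = child_cost(left) + child_cost(right)
        if cost < best[1]:
            best = (t, cost)
    return best
```

In this toy setting, when demand differs sharply across feature values, splitting lets each child commit to a better order quantity, so the split's decision cost drops below the unsplit cost.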
