Paper Title
Scalable Distributional Robustness in a Class of Non Convex Optimization with Guarantees
Paper Authors
Paper Abstract
Distributionally robust optimization (DRO) has shown a lot of promise in providing robustness in learning as well as in sample-based optimization problems. We endeavor to provide DRO solutions for a class of sum-of-fractionals, non-convex optimization problems used for decision making in prominent areas such as facility location and security games. In contrast to previous work, we find it more tractable to optimize the equivalent variance-regularized form of DRO rather than the minimax form. We transform the variance-regularized form into a mixed-integer second-order cone program (MISOCP), which, while guaranteeing near-global optimality, does not scale well enough to solve problems with real-world datasets. We further propose two abstraction approaches based on clustering and stratified sampling to increase scalability, which we then apply to real-world datasets. Importantly, we provide near-global optimality guarantees for our approach and show experimentally that our solution quality is better than the locally optimal solutions achieved by state-of-the-art gradient-based methods. We experimentally compare our different approaches and baselines, and reveal nuanced properties of a DRO solution.
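For context on the equivalence mentioned in the abstract, the following is a minimal sketch of the standard relationship between the minimax DRO objective and its variance-regularized surrogate, written here for a generic per-sample utility u(x, ξ) and a χ²-divergence ambiguity set of radius ρ/n; the exact objective, divergence, and constants used in the paper may differ.

\[
\inf_{Q \,:\, D_{\chi^2}\!\left(Q \,\|\, \hat P_n\right) \le \frac{\rho}{n}} \mathbb{E}_{Q}\!\left[u(x,\xi)\right]
\;\approx\;
\mathbb{E}_{\hat P_n}\!\left[u(x,\xi)\right]
\;-\;
\sqrt{\frac{2\rho}{n}\,\operatorname{Var}_{\hat P_n}\!\left[u(x,\xi)\right]},
\]

where \(\hat P_n\) is the empirical distribution over the \(n\) observed samples \(\xi_1,\dots,\xi_n\) and \(\rho\) controls the size of the ambiguity set. The minimax DRO problem \(\max_x \inf_{Q} \mathbb{E}_{Q}[u(x,\xi)]\) is then replaced by maximizing the variance-regularized right-hand side directly over the decision variable \(x\).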