Paper Title

Generalization In Multi-Objective Machine Learning

Authors

Súkeník, Peter, Lampert, Christoph H.

Abstract

Modern machine learning tasks often require considering not just one but multiple objectives. For example, besides the prediction quality, this could be the efficiency, robustness or fairness of the learned models, or any of their combinations. Multi-objective learning offers a natural framework for handling such problems without having to commit to early trade-offs. Surprisingly, statistical learning theory so far offers almost no insight into the generalization properties of multi-objective learning. In this work, we make first steps to fill this gap: we establish foundational generalization bounds for the multi-objective setting as well as generalization and excess bounds for learning with scalarizations. We also provide the first theoretical analysis of the relation between the Pareto-optimal sets of the true objectives and the Pareto-optimal sets of their empirical approximations from training data. In particular, we show a surprising asymmetry: all Pareto-optimal solutions can be approximated by empirically Pareto-optimal ones, but not vice versa.
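As a minimal illustration of the Pareto-optimal sets discussed in the abstract (this sketch is not from the paper; the function name and the example risk values are hypothetical), the empirically Pareto-optimal models among a finite set of candidates can be computed directly from their empirical risk vectors:

```python
# Illustrative sketch: selecting the Pareto-optimal subset of candidate
# models given their (empirical) risks on several objectives,
# e.g. (prediction error, unfairness). Lower is better in each objective.

def pareto_optimal(risks):
    """Return indices of risk vectors not dominated by any other.

    A vector a dominates b if a <= b in every objective and
    a < b in at least one.
    """
    optimal = []
    for i, ri in enumerate(risks):
        dominated = any(
            all(a <= b for a, b in zip(rj, ri))
            and any(a < b for a, b in zip(rj, ri))
            for j, rj in enumerate(risks) if j != i
        )
        if not dominated:
            optimal.append(i)
    return optimal

# Hypothetical example: three models scored on (error, unfairness).
risks = [(0.10, 0.30), (0.20, 0.10), (0.25, 0.40)]
print(pareto_optimal(risks))  # → [0, 1]; the third model is dominated
```

Applying this to empirical risks versus true risks yields two generally different Pareto-optimal sets; the paper's asymmetry result concerns how well one set approximates the other.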
