Paper Title
Non-Myopic Multifidelity Bayesian Optimization
Paper Authors
Paper Abstract
Bayesian optimization is a popular framework for the optimization of black-box functions. Multifidelity methods accelerate Bayesian optimization by exploiting low-fidelity representations of expensive objective functions. Popular multifidelity Bayesian strategies rely on sampling policies that account only for the immediate reward obtained by evaluating the objective function at a specific input, precluding the greater information gains that might be obtained by looking several steps ahead. This paper proposes a non-myopic multifidelity Bayesian framework that captures the long-term reward from future steps of the optimization. Our computational strategy features a two-step lookahead multifidelity acquisition function that maximizes the cumulative reward, measured as the improvement in the solution over two steps ahead. We demonstrate that the proposed algorithm outperforms a standard multifidelity Bayesian framework on popular benchmark optimization problems.
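The two-step lookahead idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: it is a simplified single-fidelity version (the multifidelity aspect is omitted), the function names (`gp_posterior`, `two_step_score`) are hypothetical, and the cumulative reward is approximated by Monte Carlo "fantasy" samples of the first evaluation. It shows how the score of a candidate point combines its immediate expected improvement with the best expected improvement achievable one step later.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel on 1-D inputs (length scale is illustrative).
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    # Standard GP posterior mean and standard deviation at test points.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # prior variance of RBF is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # Immediate reward: expected improvement over the incumbent (minimization).
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def two_step_score(x1, Xtr, ytr, Xcand, n_fantasy=16, rng=None):
    # Two-step lookahead score for candidate x1: immediate EI plus the
    # Monte Carlo average of the best next-step EI after a fantasized
    # observation y1 ~ posterior at x1 is added to the training data.
    if rng is None:
        rng = np.random.default_rng(0)
    best = ytr.min()
    mu1, s1 = gp_posterior(Xtr, ytr, np.array([x1]))
    reward1 = expected_improvement(mu1, s1, best)[0]
    lookahead = 0.0
    for _ in range(n_fantasy):
        y1 = rng.normal(mu1[0], s1[0])                 # fantasy outcome at x1
        Xf, yf = np.append(Xtr, x1), np.append(ytr, y1)
        mu2, s2 = gp_posterior(Xf, yf, Xcand)
        lookahead += expected_improvement(mu2, s2, min(best, y1)).max()
    return reward1 + lookahead / n_fantasy
```

A myopic policy would rank candidates by `reward1` alone; the two-step score can prefer a point whose fantasized observation is expected to set up a more informative second evaluation.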