Title

Function Optimization with Posterior Gaussian Derivative Process

Authors

Sucharita Roy, Sourabh Bhattacharya

Abstract

In this article, we propose and develop a novel Bayesian algorithm for optimization of functions whose first and second partial derivatives are known. The basic premise is a Gaussian process representation of the function, which induces a first derivative process that is also Gaussian. The Bayesian posterior solutions of the derivative process set equal to zero, given data consisting of suitable choices of input points in the function domain and their function values, emulate the stationary points of the function, which can be fine-tuned by setting restrictions on the prior in terms of the first and second derivatives of the objective function. These observations motivate us to propose a general and effective algorithm for function optimization that attempts to get closer to the true optima adaptively through in-built iterative stages. We provide a theoretical foundation for this algorithm, proving almost sure convergence to the true optima as the number of iterative stages tends to infinity. The theoretical foundation hinges upon our proofs of almost sure uniform convergence of the posteriors associated with Gaussian and Gaussian derivative processes to the underlying function and its derivatives in appropriate fixed-domain infill asymptotics setups; rates of convergence are also available. We also provide a Bayesian characterization of the number of optima using information inherent in our optimization algorithm. We illustrate our Bayesian optimization algorithm with five different examples involving maxima, minima, saddle points, and even inconclusiveness. Our examples range from simple one-dimensional problems to challenging 50- and 100-dimensional problems.
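The core premise of the abstract — that a Gaussian process posterior for a function induces a Gaussian posterior for its derivative, whose zeros emulate the stationary points — can be sketched numerically. The following is a minimal 1-D illustration (not the authors' implementation), assuming an RBF kernel with hand-picked length-scale `ell` and a toy objective f(x) = (x - 1)^2; it computes the posterior mean of the derivative process and locates its zero crossing as the emulated stationary point.

```python
import numpy as np

# For a GP with kernel k, the derivative process has posterior mean
#   m'(x*) = [dk(x*, X)/dx*] @ K^{-1} y,
# since differentiation is a linear operation on the GP.

def rbf(a, b, ell=0.5):
    # RBF kernel k(a, b) = exp(-(a - b)^2 / (2 ell^2))
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def drbf(a, b, ell=0.5):
    # Derivative of the kernel with respect to its first argument
    return -(a[:, None] - b[None, :]) / ell ** 2 * rbf(a, b, ell)

# Toy data from f(x) = (x - 1)^2, whose true minimizer is x = 1
X = np.linspace(0.0, 2.0, 20)
y = (X - 1.0) ** 2

K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)

# Posterior mean of the induced derivative process on a fine grid
grid = np.linspace(0.0, 2.0, 1001)
dmean = drbf(grid, X) @ alpha

# The zero of the posterior derivative mean emulates the stationary point
x_hat = grid[np.argmin(np.abs(dmean))]
print(x_hat)  # close to the true minimizer x = 1
```

In the paper's algorithm this idea is combined with prior restrictions on first and second derivatives and iterative refinement of the input points; the sketch above only shows the single-stage derivative-posterior step.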
