Paper Title

Accelerating Quantum Approximate Optimization Algorithm using Machine Learning

Paper Authors

Mahabubul Alam, Abdullah Ash-Saki, Swaroop Ghosh

Paper Abstract

We propose a machine-learning-based approach to accelerate the implementation of the quantum approximate optimization algorithm (QAOA), a promising quantum-classical hybrid algorithm for demonstrating so-called quantum supremacy. In QAOA, a parametric quantum circuit and a classical optimizer iterate in a closed loop to solve hard combinatorial optimization problems. The performance of QAOA improves with an increasing number of stages (depth) in the quantum circuit. However, each added stage introduces two new parameters for the classical optimizer, increasing the number of optimization-loop iterations. We note a correlation between the parameters of lower-depth and higher-depth QAOA implementations and exploit it by developing a machine learning model that predicts gate parameters close to their optimal values. As a result, the optimization loop converges in fewer iterations. We choose the graph MaxCut problem as a prototype to solve using QAOA. We perform a feature-extraction routine on 100 different QAOA instances and develop a training dataset with 13,860 optimal parameters. We present our analysis for 4 flavors of regression models and 4 flavors of classical optimizers. Finally, we show that the proposed approach can curtail the number of optimization iterations by 44.9% on average (up to 65.7%), based on an analysis performed with 264 flavors of graphs.
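To make the idea concrete, below is a minimal sketch of the warm-start scheme the abstract describes, not the authors' code: a regression model learns the mapping from optimal depth-p QAOA parameters to optimal depth-(p+1) parameters, and its prediction seeds the classical optimizer. The function names, the random placeholder training arrays, and the specific choices of scikit-learn's RandomForestRegressor (standing in for one of the paper's regression flavors) and COBYLA (one of its classical-optimizer flavors) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestRegressor

def qaoa_maxcut_expectation(params, edges, n):
    """Statevector simulation of depth-p QAOA for MaxCut on n qubits.
    params = [gamma_1, ..., gamma_p, beta_1, ..., beta_p]."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    dim = 2 ** n
    bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1   # bit table, shape (dim, n)
    cut = np.zeros(dim)
    for i, j in edges:                                     # MaxCut cost is diagonal:
        cut += bits[:, i] ^ bits[:, j]                     # +1 per edge whose endpoints differ
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # uniform superposition |+>^n
    for gamma, beta in zip(gammas, betas):
        state = state * np.exp(-1j * gamma * cut)          # cost layer exp(-i*gamma*C)
        for q in range(n):                                 # mixer layer: exp(-i*beta*X) on each qubit
            s = state.reshape(-1, 2, 1 << q)
            a, b = s[:, 0, :].copy(), s[:, 1, :].copy()
            s[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
            s[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
            state = s.reshape(dim)
    return float(np.real(np.vdot(state, cut * state)))     # <C(gamma, beta)>

# Illustrative training set: each row pairs the optimal depth-3 parameters of a
# graph instance with the optimal depth-4 parameters of the same instance.
# Random data here; the paper builds a real dataset of 13,860 optimal
# parameters from 100 QAOA instances.
X_train = np.random.rand(500, 2 * 3)
y_train = np.random.rand(500, 2 * 4)
model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

def warm_started_qaoa(opt_params_depth_p, edges, n):
    """Predict near-optimal depth-(p+1) parameters from the optimal depth-p
    parameters, then seed the classical optimizer with the prediction."""
    x0 = model.predict(np.asarray(opt_params_depth_p).reshape(1, -1)).ravel()
    # COBYLA stands in for one of the gradient-free optimizers the paper compares.
    return minimize(lambda x: -qaoa_maxcut_expectation(x, edges, n),
                    x0, method="COBYLA")
```

Calling warm_started_qaoa with the optimized depth-3 parameters of a given graph then launches the depth-4 search from the predicted point rather than a random initialization, which is where the reported reduction in optimizer iterations comes from.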
