Paper Title
Avoiding Barren Plateaus with Classical Deep Neural Networks
Paper Authors
Paper Abstract
Variational quantum algorithms (VQAs) are among the most promising algorithms in the era of Noisy Intermediate-Scale Quantum (NISQ) devices. Such algorithms are constructed using a parameterization $U(\pmb{\theta})$ together with a classical optimizer that updates the parameters $\pmb{\theta}$ so as to minimize a cost function $C$. For this task, the gradient descent method, or one of its variants, is generally used: the circuit parameters are updated iteratively using the gradient of the cost function. However, several works in the literature have shown that this method suffers from a phenomenon known as barren plateaus (BPs). In this work, we propose a new method to mitigate BPs. In general, the parameters $\pmb{\theta}$ used in the parameterization $U$ are randomly generated; in our method, they are instead obtained from a classical neural network (CNN). We show that this method, besides being able to mitigate BPs at initialization, is also able to mitigate the effect of BPs during VQA training. In addition, we show how this method behaves for different CNN architectures.
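To make the idea concrete, the following is a minimal sketch (not the authors' code) of one way to realize it: a small classical network maps a fixed input to the circuit parameters $\pmb{\theta}$, and the optimizer applies the gradient-descent update $\pmb{w}_{t+1} = \pmb{w}_t - \eta \nabla_{\pmb{w}} C$ to the network weights $\pmb{w}$ rather than to $\pmb{\theta}$ directly, with gradients flowing through the quantum circuit. The choice of PennyLane with PyTorch, the hardware-efficient ansatz, the network shape, and the cost (a single Pauli-Z expectation) are all illustrative assumptions, not details from the paper.

import torch
import pennylane as qml

n_qubits, n_layers = 4, 3
n_params = n_layers * n_qubits
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(theta):
    # Hardware-efficient-style ansatz: RY rotations plus CNOT entanglers.
    theta = theta.reshape(n_layers, n_qubits)
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(theta[layer, w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # Cost C: expectation value of Z on the first qubit (illustrative choice).
    return qml.expval(qml.PauliZ(0))

# Classical network that outputs the circuit parameters theta
# (hypothetical architecture; the paper compares several).
net = torch.nn.Sequential(
    torch.nn.Linear(8, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, n_params),
)
z = torch.randn(8)  # fixed input fed to the network

opt = torch.optim.Adam(net.parameters(), lr=0.05)
for step in range(100):
    opt.zero_grad()
    theta = net(z)         # theta comes from the CNN, not from random sampling
    cost = circuit(theta)  # evaluate C on the parameterized circuit
    cost.backward()        # gradients flow through the circuit into the network
    opt.step()

Under this construction, the distribution of $\pmb{\theta}$ at initialization is shaped by the network rather than drawn uniformly at random, which is the mechanism the abstract credits for mitigating BPs both at startup and during training.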