Paper Title
A kernel-based quantum random forest for improved classification
Paper Authors
Paper Abstract
The emergence of Quantum Machine Learning (QML) to enhance traditional classical learning methods has seen various limitations to its realisation. There is therefore an imperative to develop quantum models with unique model hypotheses to attain expressional and computational advantage. In this work we extend the linear quantum support vector machine (QSVM), with kernel function computed through quantum kernel estimation (QKE), to form a decision tree classifier constructed from a decision directed acyclic graph of QSVM nodes - the ensemble of which we term the quantum random forest (QRF). To limit overfitting, we further extend the model to employ a low-rank Nyström approximation to the kernel matrix. We provide generalisation error bounds on the model and theoretical guarantees to limit errors due to finite sampling on the Nyström-QKE strategy. In doing so, we show that we can achieve lower sampling complexity when compared to QKE. We numerically illustrate the effect of varying model hyperparameters and finally demonstrate that the QRF is able to obtain superior performance over QSVMs, while also requiring fewer kernel estimations.
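To make the model structure described above more concrete, the following is a minimal classical sketch of the same ingredients: a low-rank Nyström approximation of a kernel matrix feeding linear SVM "nodes", bagged into a small ensemble. It assumes scikit-learn, substitutes a classical RBF kernel where the paper uses a quantum kernel estimated on hardware, and uses a flat bagging ensemble rather than the paper's tree of QSVM nodes; all names and parameters here are illustrative, not the authors' implementation.

```python
# Illustrative classical analogue of the Nystrom-kernel SVM ensemble idea.
# The RBF kernel stands in for the quantum kernel (QKE); the bagged
# Nystrom + LinearSVC pipelines stand in for the ensemble of QSVM nodes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.kernel_approximation import Nystroem
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import BaggingClassifier

# Toy binary classification data.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One "node": a rank-limited Nystrom approximation of the kernel matrix
# (n_components landmark points), followed by a linear SVM in the
# resulting approximate feature space.
node = make_pipeline(
    Nystroem(kernel="rbf", n_components=32, random_state=0),
    LinearSVC(),
)

# A small ensemble of such kernel-SVM nodes trained on bootstrap samples,
# loosely mirroring the "forest of kernel classifiers" structure.
forest = BaggingClassifier(node, n_estimators=10, random_state=0)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))
```

The low-rank approximation is what keeps the number of kernel evaluations small: only the columns associated with the landmark points are ever estimated, which is the classical counterpart of the reduced sampling complexity claimed for the Nyström-QKE strategy.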