Paper title
A quantum extension of SVM-perf for training nonlinear SVMs in almost linear time
Paper authors
Paper abstract
We propose a quantum algorithm for training nonlinear support vector machines (SVMs) for feature space learning, where classical input data are encoded in the amplitudes of quantum states. Based on the classical SVM-perf algorithm of Joachims, our algorithm has a running time which scales linearly in the number of training examples $m$ (up to polylogarithmic factors) and applies to the standard soft-margin $\ell_1$-SVM model. In contrast, while classical SVM-perf has demonstrated impressive performance on both linear and nonlinear SVMs, its efficiency is guaranteed only in certain cases: it achieves linear $m$ scaling only for linear SVMs, where classification is performed in the original input data space, or for the special cases of low-rank or shift-invariant kernels. Similarly, previously proposed quantum algorithms either have super-linear scaling in $m$, or else apply to different SVM models, such as the hard-margin or least-squares $\ell_2$-SVM, which lack certain desirable properties of the soft-margin $\ell_1$-SVM model. We classically simulate our algorithm and give evidence that it can perform well in practice, not only for asymptotically large data sets.
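For concreteness, the soft-margin $\ell_1$-SVM model referred to above is the standard formulation below (a textbook statement, with the bias term omitted for simplicity, not a verbatim excerpt from the paper):

\[
\min_{w,\; \xi \ge 0} \;\; \frac{1}{2}\lVert w\rVert^2 + \frac{C}{m}\sum_{i=1}^{m}\xi_i
\quad \text{s.t.} \quad y_i\, (w \cdot x_i) \ge 1 - \xi_i, \quad i = 1,\dots,m .
\]

SVM-perf achieves its linear scaling in $m$ by instead optimizing Joachims' equivalent "one-slack" structural reformulation,

\[
\min_{w,\; \xi \ge 0} \;\; \frac{1}{2}\lVert w\rVert^2 + C\,\xi
\quad \text{s.t.} \quad \frac{1}{m}\, w \cdot \sum_{i=1}^{m} c_i\, y_i\, x_i \;\ge\; \frac{1}{m}\sum_{i=1}^{m} c_i - \xi \quad \forall\, c \in \{0,1\}^m ,
\]

which has exponentially many constraints but can be solved by a cutting-plane method that adds only the most violated constraint at each iteration.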