Title
Data compression for quantum machine learning
Authors
Abstract
The advent of noisy intermediate-scale quantum computers has introduced the exciting possibility of achieving quantum speedups in machine learning tasks. These devices, however, are composed of a small number of qubits and can faithfully run only short circuits. This puts many proposed approaches for quantum machine learning beyond the reach of currently available devices. We address the problem of efficiently compressing and loading classical data for use on a quantum computer. Our proposed methods allow both the required number of qubits and the depth of the quantum circuit to be tuned. We achieve this by using a correspondence between matrix-product states and quantum circuits, and further propose a hardware-efficient quantum circuit approach, which we benchmark on the Fashion-MNIST dataset. Finally, we demonstrate that a quantum-circuit-based classifier can achieve competitive accuracy with current tensor learning methods using only 11 qubits.
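To make the compression idea concrete, below is a minimal sketch (not the authors' code) of the kind of procedure the abstract describes: map an image's pixels to a product of local qubit states and then split the resulting state vector into a matrix-product state (MPS) whose bond dimension is truncated via sequential SVDs. The local feature map, image size, and bond dimension used here are illustrative assumptions rather than values taken from the paper.

```python
# Sketch: compress a small image into a bond-dimension-limited MPS.
import numpy as np

def image_to_state(image):
    """Map pixel values in [0, 1] to a product state of one qubit per pixel."""
    # Assumed local feature map: pixel x -> (cos(pi/2 * x), sin(pi/2 * x)).
    angles = 0.5 * np.pi * image.flatten()
    local_states = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (N, 2)
    state = local_states[0]
    for vec in local_states[1:]:
        state = np.kron(state, vec)   # full 2^N amplitude vector (small N only)
    return state

def state_to_mps(state, n_sites, chi_max):
    """Split a state vector into an MPS, truncating each bond to chi_max."""
    mps = []
    psi = state.reshape(1, -1)        # (left bond, remaining sites)
    for _ in range(n_sites - 1):
        left, rest = psi.shape
        psi = psi.reshape(left * 2, rest // 2)
        u, s, vh = np.linalg.svd(psi, full_matrices=False)
        keep = min(chi_max, len(s))   # compression: discard small singular values
        u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
        mps.append(u.reshape(left, 2, keep))
        psi = np.diag(s) @ vh
    mps.append(psi.reshape(-1, 2, 1))
    return mps

# Usage: a toy 3x3 "image" on 9 qubits, compressed to bond dimension 2.
rng = np.random.default_rng(0)
image = rng.random((3, 3))
mps = state_to_mps(image_to_state(image), n_sites=9, chi_max=2)
print([t.shape for t in mps])
```

Because an MPS with small bond dimension can be prepared by a correspondingly shallow circuit of local gates, capping `chi_max` is one way to trade off compression error against the qubit count and circuit depth available on hardware, which is the tunability the abstract refers to.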