Paper title
Stack operation of tensor networks
Paper authors
Paper abstract
The tensor network, as a factorization of tensors, aims to support the operations that are common for ordinary tensors, such as addition, contraction, and stacking. However, due to its non-unique network structure, only tensor network contraction is so far well defined. In this paper, we propose a mathematically rigorous definition of the tensor network stack operation, which compresses a large number of tensor networks into a single one without changing their structures and configurations. We illustrate the main ideas with matrix product state (MPS) based machine learning as an example. Our results are compared with a plain for loop and an efficient coding method on both CPU and GPU.
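To make the comparison in the abstract concrete, here is a minimal NumPy sketch (not the paper's actual construction; all names, shapes, and dimensions are hypothetical) of evaluating N matrix product states on one input configuration, first with a plain for loop and then by stacking the site tensors along a new leading batch axis so a single einsum per site replaces N separate contractions:

```python
import numpy as np

# Hypothetical setup: N independent MPS, each with L sites,
# physical dimension d and bond dimension chi.
rng = np.random.default_rng(0)
N, L, d, chi = 4, 5, 2, 3

def random_mps():
    # Site tensors have shape (chi_left, d, chi_right); boundary bonds are 1.
    shapes = [(1 if l == 0 else chi, d, 1 if l == L - 1 else chi)
              for l in range(L)]
    return [rng.normal(size=s) for s in shapes]

mps_batch = [random_mps() for _ in range(N)]
x = rng.integers(0, d, size=L)  # one common basis configuration

def amplitude(mps, x):
    # Contract a single MPS for the configuration x, left to right.
    v = mps[0][:, x[0], :]
    for l in range(1, L):
        v = v @ mps[l][:, x[l], :]
    return v[0, 0]

# Baseline: plain for loop over the N networks.
loop_result = np.array([amplitude(m, x) for m in mps_batch])

# Stacked version: merge the N networks along a new batch axis n,
# so each site is contracted once for all networks simultaneously.
stacked = [np.stack([mps_batch[n][l] for n in range(N)]) for l in range(L)]
v = stacked[0][:, :, x[0], :]                       # shape (N, 1, chi)
for l in range(1, L):
    v = np.einsum('nab,nbc->nac', v, stacked[l][:, :, x[l], :])
stack_result = v[:, 0, 0]

assert np.allclose(loop_result, stack_result)
```

Note that this sketch batches over an extra array axis, i.e. the "efficient coding" style of baseline; the stack operation defined in the paper instead merges the networks into a single tensor network of the same structure.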