Paper Title
Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting
Paper Authors
Paper Abstract
Multivariate Time Series (MTS) forecasting plays a vital role in a wide range of applications. Recently, Spatial-Temporal Graph Neural Networks (STGNNs) have become increasingly popular MTS forecasting methods. STGNNs jointly model the spatial and temporal patterns of MTS through graph neural networks and sequential models, significantly improving prediction accuracy. However, limited by model complexity, most STGNNs only consider short-term historical MTS data, such as data over the past hour. Yet the patterns of time series and the dependencies between them (i.e., the temporal and spatial patterns) need to be analyzed based on long-term historical MTS data. To address this issue, we propose a novel framework, in which an STGNN is Enhanced by a scalable time series Pre-training model (STEP). Specifically, we design a pre-training model to efficiently learn temporal patterns from very long-term historical time series (e.g., the past two weeks) and generate segment-level representations. These representations provide contextual information for the short-term time series input to STGNNs and facilitate modeling dependencies between time series. Experiments on three public real-world datasets demonstrate that our framework is capable of significantly enhancing downstream STGNNs, and that our pre-training model aptly captures temporal patterns.
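To make the framework described in the abstract concrete, below is a minimal, self-contained sketch in PyTorch of the general idea: a pre-training-style segment encoder summarizes a very long history into segment-level representations, which are then concatenated with the short-term input of a downstream forecaster. Everything in this sketch is an illustrative assumption rather than the authors' implementation: the module names (SegmentEncoder, ContextEnhancedForecaster), the Transformer encoder standing in for the paper's pre-training model, the plain MLP head standing in for a real STGNN, and all hyperparameters.

```python
# Minimal conceptual sketch of augmenting a short-term forecaster with
# segment-level context from long-term history (illustrative only).
import torch
import torch.nn as nn


class SegmentEncoder(nn.Module):
    """Maps each segment (patch) of a long-term series to a representation vector."""

    def __init__(self, patch_len: int, d_model: int):
        super().__init__()
        self.proj = nn.Linear(patch_len, d_model)  # embed each patch
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                          # x: (B*N, P, patch_len)
        return self.encoder(self.proj(x))          # (B*N, P, d_model)


class ContextEnhancedForecaster(nn.Module):
    """Forecaster whose short-term input is enriched with segment-level context."""

    def __init__(self, short_len: int, patch_len: int,
                 d_model: int = 64, horizon: int = 12):
        super().__init__()
        self.encoder = SegmentEncoder(patch_len, d_model)
        self.short_proj = nn.Linear(short_len, d_model)
        # Stand-in for a real STGNN: a simple MLP prediction head.
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * d_model, horizon))
        self.patch_len = patch_len

    def forward(self, long_x, short_x):
        # long_x:  (B, N, L_long)  very long history, L_long divisible by patch_len
        # short_x: (B, N, L_short) recent window actually fed to the forecaster
        B, N, L = long_x.shape
        patches = long_x.reshape(B * N, L // self.patch_len, self.patch_len)
        context = self.encoder(patches)[:, -1].reshape(B, N, -1)  # last segment's repr.
        short_h = self.short_proj(short_x)                         # (B, N, d_model)
        return self.head(torch.cat([short_h, context], dim=-1))    # (B, N, horizon)


if __name__ == "__main__":
    model = ContextEnhancedForecaster(short_len=12, patch_len=12)
    long_x = torch.randn(2, 207, 288)    # e.g., a long history of 288 steps
    short_x = torch.randn(2, 207, 12)    # e.g., the past hour at 5-minute resolution
    print(model(long_x, short_x).shape)  # torch.Size([2, 207, 12])
```

In the actual framework, the downstream model would be a full STGNN and the segment encoder would first be pre-trained on long-term series before being plugged in; this sketch only illustrates how segment-level context can be attached to the short-term input.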