Paper Title
TSEM: Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series
Authors
Abstract
Deep learning has become a one-size-fits-all solution for technical and business domains thanks to its flexibility and adaptability. It is, however, implemented using opaque models, which unfortunately undermines the trustworthiness of its outcomes. To better understand the behavior of a system, particularly one driven by time series, looking inside a deep learning model via so-called post-hoc eXplainable Artificial Intelligence (XAI) approaches is important. There are two major types of XAI for time series data, namely model-agnostic and model-specific; a model-specific approach is considered in this work. While other approaches employ either Class Activation Mapping (CAM) or an attention mechanism, we merge the two strategies into a single system, simply called the Temporally Weighted Spatiotemporal Explainable Neural Network for Multivariate Time Series (TSEM). TSEM combines the capabilities of RNN and CNN models in such a way that the RNN hidden units are employed as attention weights for the temporal axis of the CNN feature maps. The results show that TSEM outperforms XCM and is similar to STAM in terms of accuracy, while also satisfying a number of interpretability criteria, including causality, fidelity, and spatiotemporality.
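The core idea in the abstract, using RNN hidden states as attention weights over the temporal axis of CNN feature maps, can be sketched as follows. This is a minimal NumPy illustration of the weighting operation only, not the TSEM architecture itself; the tensor shapes, the mean-pooling of hidden states into per-step scores, and the random inputs are all illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes: T time steps, D variables, C CNN channels, H RNN hidden size.
T, D, C, H = 8, 4, 6, 16
rng = np.random.default_rng(0)

# Stand-in CNN feature maps over the multivariate series: (C, T, D)
feature_maps = rng.standard_normal((C, T, D))

# Stand-in RNN hidden states, one per time step: (T, H),
# reduced to one attention score per step (illustrative choice: mean pooling)
hidden = rng.standard_normal((T, H))
scores = hidden.mean(axis=1)                      # (T,)

# Softmax over time turns the scores into attention weights that sum to 1
weights = np.exp(scores - scores.max())
weights /= weights.sum()                          # (T,)

# Weight the temporal axis of every CNN feature map via broadcasting
weighted = feature_maps * weights[None, :, None]  # (C, T, D)

print(weighted.shape)  # (6, 8, 4)
```

Each channel's feature map is thus rescaled along time by the same attention distribution, which is what lets the temporal weights double as an explanation of which time steps mattered.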