Paper Title
Explainable AI for ML jet taggers using expert variables and layerwise relevance propagation
Paper Authors
Paper Abstract
A framework is presented to extract and understand decision-making information from a deep neural network (DNN) classifier of jet substructure tagging techniques. The general method studied is to provide expert variables that augment inputs ("eXpert AUGmented" variables, or XAUG variables), then apply layerwise relevance propagation (LRP) to networks both with and without XAUG variables. The XAUG variables are concatenated with the intermediate layers after network-specific operations (such as convolution or recurrence), and used in the final layers of the network. The results of comparing networks with and without the addition of XAUG variables show that XAUG variables can be used to interpret classifier behavior, increase discrimination ability when combined with low-level features, and in some cases capture the behavior of the classifier completely. The LRP technique can be used to find relevant information the network is using, and when combined with the XAUG variables, can be used to rank features, allowing one to find a reduced set of features that capture part of the network performance. In the studies presented, adding XAUG variables to low-level DNNs increased the efficiency of classifiers by as much as 30-40%. In addition to performance improvements, an approach to quantify numerical uncertainties in the training of these DNNs is presented.
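
To make the two ingredients described in the abstract concrete, the sketch below shows how expert (XAUG) variables can be concatenated with intermediate features extracted from low-level jet-constituent inputs before the final dense layers. This is a minimal PyTorch illustration only: the class name XAugTagger, the layer sizes, and the input shapes are hypothetical and are not taken from the paper.

    import torch
    import torch.nn as nn

    class XAugTagger(nn.Module):
        """Sketch of a tagger that concatenates XAUG variables with features
        produced from low-level inputs (sizes are illustrative, not the paper's)."""

        def __init__(self, n_constituents=50, n_lowlevel=4, n_xaug=6):
            super().__init__()
            # Network-specific operation on low-level inputs (here: 1D convolutions
            # over the list of jet constituents)
            self.conv = nn.Sequential(
                nn.Conv1d(n_lowlevel, 32, kernel_size=1), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=1), nn.ReLU(),
            )
            self.pool = nn.AdaptiveAvgPool1d(1)
            # Final dense layers act on [convolutional features ++ XAUG variables]
            self.head = nn.Sequential(
                nn.Linear(32 + n_xaug, 64), nn.ReLU(),
                nn.Linear(64, 1), nn.Sigmoid(),
            )

        def forward(self, constituents, xaug):
            # constituents: (batch, n_lowlevel, n_constituents); xaug: (batch, n_xaug)
            feats = self.pool(self.conv(constituents)).squeeze(-1)  # (batch, 32)
            combined = torch.cat([feats, xaug], dim=1)              # concatenate XAUG after the conv stage
            return self.head(combined)

    # Example usage with random tensors of the assumed shapes
    model = XAugTagger()
    scores = model(torch.randn(8, 4, 50), torch.randn(8, 6))

The LRP step of the abstract propagates the classifier output backwards through the network, assigning a relevance score to each input. The NumPy function below is a generic sketch of the standard LRP epsilon rule for a single dense layer, shown only to illustrate the kind of redistribution involved; it is not the specific rule set or implementation used in the paper.

    import numpy as np

    def lrp_epsilon_dense(a, w, b, relevance_out, eps=1e-6):
        """Epsilon rule for one dense layer: redistribute output relevance to the
        inputs in proportion to each input's contribution to the pre-activation.
        a: (n_in,) activations, w: (n_in, n_out) weights, b: (n_out,) biases."""
        z = a @ w + b                      # pre-activations of this layer
        z = z + eps * np.sign(z)           # stabilizer against small denominators
        s = relevance_out / z              # (n_out,)
        return a * (w @ s)                 # (n_in,) relevance assigned to the inputs

In a setup like this, comparing the relevance assigned to the XAUG inputs against that assigned to the low-level convolutional features is what allows the features to be ranked and a reduced, more interpretable set to be identified, as described in the abstract.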