Paper Title
Utilizing Explainable AI for improving the Performance of Neural Networks
Paper Authors
Paper Abstract
Nowadays, deep neural networks are widely used in a variety of fields that have a direct impact on society. Although these models typically show outstanding performance, they have long been used as black boxes. To address this, Explainable Artificial Intelligence (XAI) has been developing as a field that aims to improve the transparency of these models and thereby increase their trustworthiness. We propose a retraining pipeline that starts from XAI results and utilizes state-of-the-art techniques to consistently improve model predictions. Specifically, we use the XAI results, namely SHapley Additive exPlanations (SHAP) values, to assign specific training weights to the data samples. This leads to improved training of the model and, consequently, better performance. To benchmark our method, we evaluate it on both a real-life and a public dataset. First, we apply the method to a radar-based people counting scenario. Afterwards, we test it on CIFAR-10, a public Computer Vision dataset. Experiments using the SHAP-based retraining approach achieve 4% higher accuracy than standard equal-weight retraining on the people counting task. Moreover, on CIFAR-10, our SHAP-based weighting strategy yields 3% higher accuracy than a training procedure with equally weighted samples.
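The abstract describes the core mechanism only at a high level: SHAP values are computed for the training samples and then mapped to per-sample weights for retraining. A minimal sketch of one plausible realization is shown below, assuming a Keras classifier and the `shap` library's `DeepExplainer`; the weight mapping (mean absolute SHAP magnitude, normalized to mean 1) and the function name `shap_sample_weights` are illustrative assumptions, not the authors' published scheme.

```python
import numpy as np
import shap  # pip install shap


def shap_sample_weights(model, background, x_train):
    """Turn per-sample SHAP attribution magnitudes into training weights.

    `model` is assumed to be a trained Keras classifier. The aggregation
    below (mean |SHAP| per sample, normalized to mean 1) is an assumed
    heuristic, not the formula from the paper.
    """
    explainer = shap.DeepExplainer(model, background)
    shap_values = explainer.shap_values(x_train)
    if not isinstance(shap_values, list):  # single-output models return one array
        shap_values = [shap_values]
    # Mean absolute attribution per sample, averaged over output classes.
    magnitude = np.mean(
        [np.abs(sv).reshape(len(x_train), -1).mean(axis=1) for sv in shap_values],
        axis=0,
    )
    # Normalize so the weights average to 1, keeping the loss scale comparable
    # to standard equal-weight training.
    return magnitude / magnitude.mean()


# Retraining with the derived weights via Keras' built-in sample weighting
# (hypothetical tensors; a subset of x_train serves as the SHAP background):
# weights = shap_sample_weights(model, x_train[:100], x_train)
# model.fit(x_train, y_train, sample_weight=weights, epochs=10)
```

Note that this sketch up-weights samples with large attribution magnitudes; the inverse scheme (down-weighting them) is equally conceivable, and the abstract does not specify which direction the authors chose.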