Paper Title
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Paper Authors
Paper Abstract
Post-hoc analysis is a popular category of eXplainable artificial intelligence (XAI) research. In particular, methods that generate heatmaps have been used to explain deep neural networks (DNNs), which are black-box models. Heatmaps can be appealing because they are intuitive and visual to interpret, but assessing their quality may not be straightforward. Different ways of assessing heatmap quality have their own merits and shortcomings. This paper introduces a synthetic dataset that can be generated ad hoc along with ground-truth heatmaps, enabling a more objective quantitative assessment. Each sample is an image of a cell with easily recognizable features that are delineated by a localization ground-truth mask, hence facilitating a more transparent evaluation of different XAI methods. Comparisons and recommendations are made, shortcomings are clarified, and future research directions are suggested for handling the finer details of selected post-hoc analysis methods. Furthermore, mabCAM is introduced as a heatmap generation method compatible with our ground-truth heatmaps. The framework is easily generalizable and uses only standard deep learning components.
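As a rough illustration of the kind of sample the abstract describes (this is a minimal NumPy sketch with hypothetical names and parameters of our own, not the paper's actual generator), the following code synthesizes a toy cell image whose class-discriminative feature region doubles as the ground-truth heatmap:

```python
# Minimal sketch (not the paper's generator): a synthetic "cell" image
# plus a binary ground-truth heatmap marking its discriminative feature.
import numpy as np

def make_cell_sample(size=64, rng=None):
    """Return (image, ground_truth_mask) for one synthetic cell."""
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[0:size, 0:size]
    cy, cx = rng.integers(size // 4, 3 * size // 4, size=2)
    r = int(rng.integers(size // 8, size // 5))

    # Cell body: a soft disk on a noisy background.
    body = ((yy - cy) ** 2 + (xx - cx) ** 2) <= r ** 2
    image = 0.1 * rng.standard_normal((size, size))
    image[body] += 0.5

    # Discriminative feature: a bright nucleus offset inside the cell.
    fy, fx = cy + r // 2, cx
    feature = ((yy - fy) ** 2 + (xx - fx) ** 2) <= (r // 3) ** 2
    image[feature] += 0.8

    # Ground-truth heatmap: exactly the region a faithful saliency
    # method should highlight for this feature-defined class.
    mask = feature.astype(np.float32)
    return image.astype(np.float32), mask
```

A saliency method's heatmap for such a sample can then be scored against the mask quantitatively, e.g. with an intersection-over-union or correlation metric, rather than judged by visual inspection alone.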