Paper Title
iCapsNets: Towards Interpretable Capsule Networks for Text Classification
Paper Authors
Paper Abstract
Many text classification applications require models with satisfactory performance as well as good interpretability. Traditional machine learning methods are easy to interpret but achieve low accuracy. The development of deep learning models has boosted performance significantly; however, deep learning models are typically hard to interpret. In this work, we propose interpretable capsule networks (iCapsNets) to bridge this gap. iCapsNets use capsules to model semantic meanings and explore novel methods to increase interpretability. The design of iCapsNets is consistent with human intuition and enables them to produce human-understandable interpretation results. Notably, iCapsNets can be interpreted both locally and globally. In terms of local interpretability, iCapsNets offer a simple yet effective method to explain the prediction for each data sample. On the other hand, iCapsNets explore a novel way to explain the model's general behavior, achieving global interpretability. Experimental studies show that our iCapsNets yield meaningful local and global interpretation results, without suffering significant performance loss compared to non-interpretable methods.
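The abstract states that iCapsNets use capsules to model semantic meanings. The paper's exact architecture is not given here, so as background, the sketch below shows the standard dynamic routing-by-agreement mechanism from the original capsule network literature (Sabour et al.), in NumPy. The shapes, iteration count, and toy inputs are illustrative assumptions, not the paper's actual configuration; the length of each output capsule vector is conventionally read as the presence probability of the concept (here, a class) it represents.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Squashing non-linearity: shrinks the vector norm into [0, 1)
    # while preserving direction, so capsule length reads as a probability.
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

def route(u_hat, num_iters=3):
    # u_hat: predictions from lower-level capsules for each upper-level
    # capsule, shape (num_lower, num_upper, dim_upper).
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))  # routing logits
    for _ in range(num_iters):
        # Coupling coefficients: softmax over upper capsules per lower capsule.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum of predictions
        v = squash(s)                           # upper-capsule outputs
        b = b + (u_hat * v[None]).sum(axis=-1)  # increase logits by agreement
    return v

# Toy usage (hypothetical sizes): 6 lower capsules route to 3 class
# capsules of dimension 4; capsule lengths act as class scores.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 3, 4))
v = route(u_hat)
lengths = np.linalg.norm(v, axis=-1)
```

The agreement update is what makes routing interpretable in spirit: lower capsules whose predictions align with an upper capsule's output are coupled more strongly to it, so the coupling coefficients indicate which lower-level features support each class.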