Paper Title
Pruning and Sparsemax Methods for Hierarchical Attention Networks
Paper Authors
Paper Abstract
This paper introduces and evaluates two novel Hierarchical Attention Network models [Yang et al., 2016]: i) Hierarchical Pruned Attention Networks, which remove irrelevant words and sentences from the classification process in order to reduce potential noise in document classification accuracy, and ii) Hierarchical Sparsemax Attention Networks, which replace the Softmax function used in the attention mechanism with Sparsemax [Martins and Astudillo, 2016], capable of better handling importance distributions in which many words or sentences have very low probabilities. Our empirical evaluation on the IMDB Reviews sentiment analysis dataset shows that both approaches are able to match the results obtained by the current state-of-the-art (without, however, any significant benefits). All our source code is made available at https://github.com/jmribeiro/dsl-project.
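For reference, below is a minimal NumPy sketch of the Sparsemax transformation [Martins and Astudillo, 2016] that the second model substitutes for Softmax in the attention mechanism. This is an independent illustration under the standard Sparsemax definition, not code from the linked repository, and the example attention scores are hypothetical.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax projection of a score vector z onto the probability simplex
    (Martins and Astudillo, 2016). Unlike Softmax, it can assign exactly
    zero probability to low-scoring words or sentences."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum    # entries kept in the support
    k_max = k[support][-1]                 # support size
    tau = (cumsum[k_max - 1] - 1) / k_max  # threshold for the projection
    return np.maximum(z - tau, 0.0)

# Hypothetical attention scores for five words in a sentence.
scores = np.array([2.0, 1.5, 0.3, 0.2, 0.1])
softmax = np.exp(scores) / np.exp(scores).sum()
print("softmax:  ", np.round(softmax, 3))             # every word keeps some weight
print("sparsemax:", np.round(sparsemax(scores), 3))   # low-scoring words get exactly 0
```

For the scores above, Sparsemax returns [0.75, 0.25, 0, 0, 0], concentrating the attention weight on the two most relevant words, whereas Softmax spreads non-zero weight over all five.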