Paper Title


TIRAMISU: A Polyhedral Compiler for Dense and Sparse Deep Learning

Authors

Riyadh Baghdadi, Abdelkader Nadir Debbagh, Kamel Abdous, Fatima Zohra Benhamida, Alex Renda, Jonathan Elliott Frankle, Michael Carbin, Saman Amarasinghe

Abstract

In this paper, we demonstrate a compiler that can optimize sparse and recurrent neural networks, both of which are currently outside the scope of existing neural network compilers (sparse neural networks here refers to networks that can be accelerated with sparse tensor algebra techniques). Our demonstration includes a mapping of sparse and recurrent neural networks to the polyhedral model, along with an implementation of our approach in TIRAMISU, our state-of-the-art polyhedral compiler. We evaluate our approach on a set of deep learning benchmarks and compare our results with hand-optimized industrial libraries. Our results show that our approach at least matches Intel MKL-DNN and in some cases outperforms it by 5x (on multicore CPUs).
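To make the abstract's notion of "sparse tensor algebra" concrete, the following is a minimal sketch (not TIRAMISU's API) of the kind of kernel such techniques target: a sparse matrix-vector product over the CSR (compressed sparse row) format, where the inner loop bounds depend on the input data rather than being statically known, which is precisely what makes such kernels hard for conventional polyhedral compilers.

```python
def spmv_csr(row_ptr, col_idx, vals, x):
    """Compute y = A @ x for a sparse matrix A stored in CSR form.

    row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i;
    col_idx[k] and vals[k] give the column and value of nonzero k.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):                       # static outer loop
        for k in range(row_ptr[i], row_ptr[i + 1]):  # data-dependent bounds
            y[i] += vals[k] * x[col_idx[k]]
    return y
```

For example, the matrix [[1, 0], [2, 3]] is stored as `row_ptr = [0, 1, 3]`, `col_idx = [0, 0, 1]`, `vals = [1.0, 2.0, 3.0]`; multiplying it by `x = [1.0, 1.0]` yields `[1.0, 5.0]`.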
