Paper Title

STANNIS: Low-Power Acceleration of Deep Neural Network Training Using Computational Storage

Authors

Ali HeydariGorji, Mahdi Torabzadehkashi, Siavash Rezaei, Hossein Bobarshad, Vladimir Alves, Pai H. Chou

Abstract

This paper proposes a framework for distributed, in-storage training of neural networks on clusters of computational storage devices. Such devices not only contain hardware accelerators but also eliminate data movement between the host and storage, resulting in both improved performance and power savings. More importantly, this in-storage style of training ensures that private data never leaves the storage, while still allowing full control over the sharing of public data. Experimental results show up to 2.7x speedup and a 69% reduction in energy consumption, with no significant loss in accuracy.
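The abstract's privacy claim rests on a simple pattern: each storage device trains on its own local data shard, and only model parameters (never raw data) are sent to the host for merging. A minimal sketch of that pattern, with hypothetical names of our own (`local_step`, `federated_average`) that are not from the paper:

```python
import random

# Hedged sketch of the in-storage training idea described in the abstract:
# each computational storage device runs gradient steps on its private
# shard; only the resulting model weights leave the device, and the host
# merely averages them. Raw training data never moves to the host.

random.seed(0)

def local_step(w, shard, lr=0.1):
    """One gradient-descent step for the model y = w*x on a private shard."""
    grad = sum(2 * x * (w * x - y) for x, y in shard) / len(shard)
    return w - lr * grad

def federated_average(weights):
    """Host-side merge: combine per-device weights, not per-device data."""
    return sum(weights) / len(weights)

# Three devices, each holding a private shard drawn from y = 3x + noise.
shards = []
for _ in range(3):
    xs = [random.gauss(0, 1) for _ in range(50)]
    shards.append([(x, 3.0 * x + 0.01 * random.gauss(0, 1)) for x in xs])

w = 0.0
for _ in range(200):  # training rounds: local steps, then a weight average
    w = federated_average([local_step(w, s) for s in shards])

print(f"{w:.2f}")  # learned slope, close to the true value 3
```

This is only a toy linear model; the paper targets deep neural networks on hardware-accelerated storage devices, but the data-flow property (weights move, data does not) is the same.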
