Paper Title
Device Modeling Bias in ReRAM-based Neural Network Simulations
Paper Authors
Paper Abstract
Data-driven modeling approaches such as jump tables are promising techniques for modeling populations of resistive random-access memory (ReRAM) and other emerging memory devices in hardware neural network simulations. Because these tables rely on data interpolation, this work explores open questions about their fidelity with respect to the stochastic device behavior they model. We study how various jump table device models affect the attained network performance estimates, a concept we define as modeling bias. Two methods of jump table device modeling, binning and Optuna-optimized binning, are explored using synthetic data with known distributions for benchmarking, as well as experimental data obtained from TiOx ReRAM devices. Results on a multi-layer perceptron trained on MNIST show that binning-based device models can behave unpredictably, particularly when the device dataset contains few points, sometimes over-promising and sometimes under-promising target network accuracy. This paper also proposes device-level metrics that exhibit trends similar to the network-level modeling bias metric. The proposed approach opens possibilities for future investigation of statistical device models with better performance, as well as experimental verification of modeling bias in different in-memory computing and neural network architectures.
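To illustrate the binning approach to jump table modeling described above, the following is a minimal sketch, not the authors' implementation. It assumes synthetic (conductance, ΔG) pairs with a known state-dependent distribution standing in for measured device data; the bin count and the data-generating distribution are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic device data: (conductance, delta-G) update pairs whose
# distribution depends on the current state -- a stand-in for measured
# ReRAM potentiation data (hypothetical values, for illustration only).
g = rng.uniform(0.0, 1.0, 5000)
dg = rng.normal(0.1 * (1.0 - g), 0.02)

# Binning-based jump table: partition the conductance axis into bins and
# store the empirical delta-G samples observed within each bin.
n_bins = 20
edges = np.linspace(0.0, 1.0, n_bins + 1)
table = [dg[(g >= edges[i]) & (g < edges[i + 1])] for i in range(n_bins)]

def jump(g_now: float) -> float:
    """Apply one stochastic update by resampling an empirical delta-G
    from the bin containing the current conductance."""
    i = min(int(g_now * n_bins), n_bins - 1)
    return g_now + rng.choice(table[i])

# One simulated potentiation pulse applied to a device at g = 0.5.
g_new = jump(0.5)
```

The fidelity question the paper raises enters through the bin contents: with few points per bin, the resampled ΔG distribution is a coarse approximation of the true device statistics, which is where the modeling bias studied here can arise.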