Paper Title
Window-Based Distribution Shift Detection for Deep Neural Networks
Paper Authors
Paper Abstract
To deploy and operate deep neural models in production, the quality of their predictions, which might be contaminated benignly or manipulated maliciously by input distributional deviations, must be monitored and assessed. Specifically, we study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data, with the aim of detecting input distributional deviations that potentially damage the quality of the network's predictions. Using selective prediction principles, we propose a distribution deviation detection method for DNNs. The proposed method is derived from a tight coverage generalization bound computed over a sample of instances drawn from the true underlying distribution. Based on this bound, our detector continuously monitors the operation of the network out-of-sample over a test window and fires an alarm whenever a deviation is detected. Our novel detection method performs on par with or better than the state of the art, while consuming substantially less computation time (a five-order-of-magnitude reduction) and space. Unlike previous methods, whose cost per detection grows at least linearly with the size of the source sample, rendering them inapplicable to "Google-scale" datasets, our approach eliminates this dependence, making it suitable for real-world applications.
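To make the detection scheme described above concrete, the following Python snippet is a minimal sketch, not the paper's actual algorithm: it uses the maximum softmax probability (the selective-prediction "softmax response") as the confidence score, calibrates a rejection threshold on a source sample to hit a target coverage, and substitutes a simple Hoeffding-style lower bound for the paper's tighter coverage generalization bound. The function names (`calibrate_threshold`, `coverage_lower_bound`, `detect_shift`) and the synthetic Beta-distributed confidence scores are illustrative assumptions.

```python
import numpy as np

def softmax_response(logits):
    # Selective-prediction confidence score: maximum softmax probability,
    # computed from a model's raw logits (one row per input).
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def calibrate_threshold(source_conf, target_coverage):
    # Threshold chosen (once, offline) so that roughly `target_coverage`
    # of the source sample is accepted rather than abstained on.
    return np.quantile(source_conf, 1.0 - target_coverage)

def coverage_lower_bound(target_coverage, m, delta):
    # Hoeffding-style lower bound on the coverage of a window of m points,
    # holding with probability >= 1 - delta. A simplified stand-in for the
    # paper's tighter coverage generalization bound.
    return target_coverage - np.sqrt(np.log(1.0 / delta) / (2.0 * m))

def detect_shift(window_conf, threshold, bound):
    # Fire an alarm when the empirical coverage on the test window
    # drops below the coverage lower bound.
    return (window_conf >= threshold).mean() < bound

# Illustrative run on synthetic confidence scores (assumed distributions).
rng = np.random.default_rng(0)
source_conf = rng.beta(8, 2, size=10_000)   # stand-in: in-distribution scores
theta = calibrate_threshold(source_conf, target_coverage=0.9)
bound = coverage_lower_bound(target_coverage=0.9, m=500, delta=0.01)

healthy = rng.beta(8, 2, size=500)          # window drawn from the source
shifted = rng.beta(4, 4, size=500)          # shifted window: lower confidence
print(detect_shift(healthy, theta, bound))  # expected: False (no alarm)
print(detect_shift(shifted, theta, bound))  # expected: True  (alarm)
```

Note how the sketch reflects the complexity claim in the abstract: the source sample is touched only once, during offline calibration, while each detection step processes only the test window, so the per-detection cost is independent of the source sample's size.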