Paper Title

Class Interference of Deep Neural Networks

Paper Authors

Dongcui Diao, Hengshuai Yao, Bei Jiang

Paper Abstract

Recognizing and telling similar objects apart is hard even for human beings. In this paper, we show that there is a phenomenon of class interference in all deep neural networks. Class interference represents the learning difficulty in data, and it constitutes the largest percentage of generalization errors made by deep networks. To understand class interference, we propose cross-class tests, class ego directions, and interference models. We show how to use these definitions to study the minima flatness and class interference of a trained model. We also show how to detect class interference during training through the label dancing pattern and class dancing notes.
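
The abstract does not spell out how class interference is measured. As a rough, hypothetical illustration only (not the paper's cross-class test or interference model), the sketch below ranks class pairs by how often a trained classifier confuses them on held-out data; the function name `top_confused_pairs` and the synthetic labels are assumptions made for this example.

```python
import numpy as np

def top_confused_pairs(y_true, y_pred, num_classes, k=5):
    """Rank (true, predicted) class pairs by how often they are confused.

    A large off-diagonal count in the confusion matrix suggests the two
    classes interfere with each other, i.e. the model has trouble
    telling them apart.
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    np.fill_diagonal(conf, 0)                      # keep only errors
    flat = np.argsort(conf, axis=None)[::-1][:k]   # largest counts first
    pairs = [np.unravel_index(i, conf.shape) for i in flat]
    return [(int(t), int(p), int(conf[t, p])) for t, p in pairs]

# Synthetic example: class 3 is often mistaken for class 5 and vice versa.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=10_000)
y_pred = y_true.copy()
noise = rng.random(y_true.shape) < 0.1
y_pred[noise & (y_true == 3)] = 5
y_pred[noise & (y_true == 5)] = 3

print(top_confused_pairs(y_true, y_pred, num_classes=10))
```

In this toy setup, the (3, 5) and (5, 3) pairs dominate the off-diagonal counts, which is the kind of class-pair structure the paper's class interference analysis is concerned with.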
