Title
Contextual Interference Reduction by Selective Fine-Tuning of Neural Networks
Authors
Abstract
Feature disentanglement of foreground target objects from the surrounding background context has not yet been fully accomplished. The lack of network interpretability hinders progress toward feature disentanglement and better generalization robustness. In this work, we study the role of context in interfering with a disentangled representation of the foreground target object. We hypothesize that the representation of the surrounding context is heavily tied to the foreground object due to the dense hierarchical parametrization of convolutional networks trained with under-constrained learning algorithms. Working in a framework that benefits from both bottom-up and top-down processing paradigms, we investigate a systematic approach to shifting the representations learned by feedforward networks from an emphasis on the irrelevant context to the foreground objects. The top-down processing provides importance maps as a means of internal self-interpretation for the network; these maps guide the learning algorithm to focus on the relevant foreground regions toward achieving more robust representations. We define an experimental evaluation setup that emphasizes the role of context using the MNIST dataset. The experimental results reveal not only that label prediction accuracy improves, but also that a higher degree of robustness to background perturbation is obtained under various noise generation methods.
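The background-perturbation evaluation described above can be illustrated with a minimal sketch: noise is injected only into the background pixels of an MNIST-style digit while the foreground object is left intact. The `perturb_background` helper, the intensity threshold used to form the foreground mask, and the particular noise distributions are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def perturb_background(image, rng, kind="gaussian", threshold=0.1):
    """Replace background pixels (intensity <= threshold) with noise.

    Hypothetical sketch of the evaluation setup: the crude mask
    (intensity thresholding) stands in for a proper foreground mask.
    """
    foreground = image > threshold  # boolean foreground mask
    if kind == "gaussian":
        # Gaussian noise clipped to the valid [0, 1] intensity range
        noise = np.clip(rng.normal(0.5, 0.25, image.shape), 0.0, 1.0)
    elif kind == "uniform":
        noise = rng.uniform(0.0, 1.0, image.shape)
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    # Keep foreground pixels, replace background pixels with noise
    return np.where(foreground, image, noise)

rng = np.random.default_rng(0)
digit = np.zeros((28, 28))
digit[10:18, 12:16] = 1.0  # toy stand-in for an MNIST digit stroke
noisy = perturb_background(digit, rng, kind="uniform")
```

A robustness evaluation in this spirit would compare a classifier's accuracy on `digit` versus `noisy` across several `kind` settings; a representation focused on the foreground should degrade less under such background-only perturbations.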