Paper Title

Global quantitative robustness of regression feed-forward neural networks

Authors

Werner, Tino

Abstract

Neural networks are an indispensable model class for many complex learning tasks. Despite the popularity and importance of neural networks, and despite the many established techniques in the literature for stabilizing and robustifying their training, the classical concepts of robust statistics have so far rarely been considered in the context of neural networks. Therefore, we adapt the notion of the regression breakdown point to regression neural networks and compute the breakdown point for different feed-forward network configurations and contamination settings. In an extensive simulation study, we compare the performance of non-robust and robust regression feed-forward neural networks in a plethora of different configurations, measured by the out-of-sample loss, by a proxy of the breakdown rate, and by the number of training steps. The results indeed motivate the use of robust loss functions for neural network training.
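To illustrate why a robust loss can limit the influence of contaminated responses, the following is a minimal NumPy sketch of the Huber loss, a standard robust loss for regression. The specific `delta` value and residuals are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones,
    so a single gross outlier cannot dominate the training objective."""
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

def squared_loss(residuals):
    """Ordinary squared-error loss, for comparison."""
    return 0.5 * residuals ** 2

# Mostly clean residuals plus one gross outlier (simulated contamination).
res = np.array([0.1, -0.2, 0.05, 100.0])

print("mean squared loss:", squared_loss(res).mean())  # dominated by the outlier
print("mean Huber loss:  ", huber_loss(res).mean())    # outlier grows only linearly
```

The outlier contributes quadratically to the squared loss but only linearly to the Huber loss, which is the basic mechanism by which robust losses keep the training objective, and hence the fitted network, from breaking down under contamination.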
