Paper Title
Risk Bounds for Robust Deep Learning
Paper Authors
Paper Abstract
It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data. In this paper, we support these empirical findings with statistical theory. We especially show that empirical-risk minimization with unbounded, Lipschitz-continuous loss functions, such as the least-absolute deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss, can provide efficient prediction under minimal assumptions on the data. More generally speaking, our paper provides theoretical evidence for the benefits of robust loss functions in deep learning.
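For orientation, the abstract itself does not display the named losses; the following are their standard parameterizations (with residual r and tuning constant \kappa > 0, and with common scaling conventions that may differ from those used in the paper):

\[
\ell_{\mathrm{lad}}(r) = |r|,
\qquad
\ell_{\mathrm{huber}}(r) =
\begin{cases}
r^2/2 & \text{if } |r| \le \kappa,\\
\kappa |r| - \kappa^2/2 & \text{if } |r| > \kappa,
\end{cases}
\]
\[
\ell_{\mathrm{cauchy}}(r) = \frac{\kappa^2}{2}\log\!\left(1 + (r/\kappa)^2\right),
\qquad
\ell_{\mathrm{tukey}}(r) =
\begin{cases}
\dfrac{\kappa^2}{6}\left[1 - \left(1 - (r/\kappa)^2\right)^3\right] & \text{if } |r| \le \kappa,\\[4pt]
\kappa^2/6 & \text{if } |r| > \kappa.
\end{cases}
\]

In contrast to the squared-error loss, each of these losses grows at most linearly in the residual, which is the Lipschitz property the abstract invokes to explain robustness against flawed data.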