Paper Title
Agnostic Learning of a Single Neuron with Gradient Descent
Paper Authors
Paper Abstract
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(σ(w^\top x)-y)^2]$ over some unknown joint distribution $\mathcal{D}$ by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples $S\sim \mathcal{D}^n$. The activation function $σ$ is an arbitrary Lipschitz and non-decreasing function, making the optimization problem nonconvex and nonsmooth in general, and covering typical neural network activation functions and inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption on the relationship between the labels $y$ and the input $x$ is made, if the optimal population risk is $\mathsf{OPT}$, we show that gradient descent achieves population risk $O(\mathsf{OPT})+ε$ in polynomial time and sample complexity when $σ$ is strictly increasing. For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2})+ε$. When labels take the form $y = σ(v^\top x) + ξ$ for zero-mean sub-Gaussian noise $ξ$, we show that the population risk guarantees for gradient descent improve to $\mathsf{OPT} + ε$. Our sample complexity and runtime guarantees are (almost) dimension-independent, and when $σ$ is strictly increasing, require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption for the marginal distribution of the input.
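To make the setup concrete, below is a minimal sketch (not the paper's code) of the learning problem described in the abstract: full-batch gradient descent on the empirical square loss of a single neuron $σ(w^\top x)$, instantiated here with the ReLU activation on synthetic noisy-teacher data $y = σ(v^\top x) + ξ$. The data generation, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
# Sketch: gradient descent on the empirical risk of a single neuron.
# Activation, data, step size, and iteration budget are assumptions
# chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def relu_subgrad(z):
    # A subgradient of ReLU; the paper allows any Lipschitz,
    # non-decreasing activation, which may be nonsmooth like ReLU.
    return (z > 0).astype(float)

# Synthetic data in the noisy-teacher setting mentioned in the abstract:
# y = relu(v^T x) + zero-mean noise. In the agnostic setting, y could be
# generated by an arbitrary joint distribution over (x, y).
n, d = 1000, 20
X = rng.normal(size=(n, d)) / np.sqrt(d)
v = rng.normal(size=d)
y = relu(X @ v) + 0.1 * rng.normal(size=n)

def empirical_risk(w):
    return np.mean((relu(X @ w) - y) ** 2)

# Plain full-batch gradient descent on the empirical risk.
w = np.zeros(d)
eta = 0.5  # step size (assumed)
for t in range(500):
    z = X @ w
    residual = relu(z) - y                               # shape (n,)
    grad = 2.0 * (X.T @ (residual * relu_subgrad(z))) / n
    w -= eta * grad

print(f"final empirical risk: {empirical_risk(w):.4f}")
```

The paper's guarantees concern the population risk of the iterate returned by this kind of procedure; the snippet only illustrates the empirical objective being minimized, not the analysis.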