Paper Title
Topology-aware Generalization of Decentralized SGD
Paper Authors
Paper Abstract
This paper studies the algorithmic stability and generalizability of decentralized stochastic gradient descent (D-SGD). We prove that the consensus model learned by D-SGD is $\mathcal{O}(N^{-1}+m^{-1}+\lambda^2)$-stable in expectation in the non-convex non-smooth setting, where $N$ is the total sample size, $m$ is the worker number, and $1-\lambda$ is the spectral gap that measures the connectivity of the communication topology. These results then deliver an $\mathcal{O}(N^{-(1+\alpha)/2}+m^{-(1+\alpha)/2}+\lambda^{1+\alpha}+\phi_{\mathcal{S}})$ in-average generalization bound, which is non-vacuous even when $\lambda$ is close to $1$, in contrast to the vacuous bounds suggested by existing literature on the projected version of D-SGD. Our theory indicates that the generalizability of D-SGD is positively correlated with the spectral gap, and can explain why consensus control in the initial training phase can ensure better generalization. Experiments with VGG-11 and ResNet-18 on CIFAR-10, CIFAR-100 and Tiny-ImageNet justify our theory. To the best of our knowledge, this is the first work on the topology-aware generalization of vanilla D-SGD. Code is available at https://github.com/Raiden-Zhu/Generalization-of-DSGD.
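To illustrate the topology quantity $\lambda$ appearing in the bounds above, the following minimal Python sketch (not the authors' released code; the helper names ring_mixing_matrix and spectral_quantities are hypothetical) builds the doubly stochastic mixing matrix of a ring topology and computes $\lambda$, the second-largest eigenvalue magnitude, together with the spectral gap $1-\lambda$:

```python
# Illustrative sketch: how lambda in the stability/generalization bounds can be
# obtained from a gossip (mixing) matrix W. For a doubly stochastic symmetric W,
# lambda is the second-largest eigenvalue magnitude and 1 - lambda is the
# spectral gap of the communication topology.
import numpy as np

def ring_mixing_matrix(m: int) -> np.ndarray:
    """Doubly stochastic mixing matrix of a ring over m workers
    (each worker averages itself and its two neighbours with weight 1/3)."""
    W = np.zeros((m, m))
    for i in range(m):
        W[i, i] = 1 / 3
        W[i, (i - 1) % m] = 1 / 3
        W[i, (i + 1) % m] = 1 / 3
    return W

def spectral_quantities(W: np.ndarray) -> tuple[float, float]:
    """Return (lambda, 1 - lambda) for a symmetric doubly stochastic matrix."""
    eigvals = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    lam = eigvals[1]  # second-largest eigenvalue magnitude
    return lam, 1.0 - lam

# A larger ring is less connected, so lambda approaches 1 and the
# lambda^{1+alpha} term in the generalization bound degrades.
for m in (4, 16, 64):
    lam, gap = spectral_quantities(ring_mixing_matrix(m))
    print(f"m={m:3d}  lambda={lam:.4f}  spectral gap={gap:.4f}")
```

Under these assumptions, denser topologies (e.g., fully connected) yield a smaller $\lambda$ and hence, by the theory above, a tighter generalization bound than sparse ones such as a large ring.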