Paper Title
Do Bayesian Neural Networks Need To Be Fully Stochastic?
Paper Authors
Paper Abstract
We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary. To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors for $n$-dimensional predictive problems. In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs.
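To make the construction in the abstract concrete, below is a minimal sketch, not the authors' implementation, of a partially stochastic network: all weights are deterministic point estimates, and only the $n$ output biases are stochastic, modeled here as factorized Gaussians sampled via the reparameterization trick. The class name `PartiallyStochasticNet` and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PartiallyStochasticNet(nn.Module):
    """Sketch of a partially stochastic network: deterministic weights,
    with only the n output-layer biases treated stochastically."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # Deterministic backbone: trained as a standard point-estimate network.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim, bias=False),
        )
        # Stochastic part: one Gaussian per output dimension (n biases total),
        # parameterized by a mean and a log standard deviation.
        self.bias_mean = nn.Parameter(torch.zeros(out_dim))
        self.bias_log_std = nn.Parameter(torch.full((out_dim,), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterization trick: sample the biases, add them to the
        # deterministic features; each call is one predictive draw.
        eps = torch.randn_like(self.bias_mean)
        bias = self.bias_mean + self.bias_log_std.exp() * eps
        return self.backbone(x) + bias

# Usage: average several forward passes to approximate the predictive mean.
net = PartiallyStochasticNet(in_dim=16, hidden_dim=64, out_dim=3)
x = torch.randn(8, 16)
samples = torch.stack([net(x) for _ in range(10)])  # shape (10, 8, 3)
predictive_mean = samples.mean(dim=0)
```

Note the memory saving the abstract mentions: only the `out_dim` bias means and log standard deviations carry distributional parameters, while every weight matrix remains a single point estimate.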