Paper Title

Bayesian interpolation for power laws in neural data analysis

Authors

Davidovich, Iván A., Roudi, Yasser

Abstract

Power laws arise in a variety of phenomena ranging from matter undergoing phase transition to the distribution of word frequencies in the English language. Usually, their presence is only apparent when data is abundant, and accurately determining their exponents often requires even larger amounts of data. As the scale of recordings in neuroscience becomes larger, an increasing number of studies attempt to characterise potential power-law relationships in neural data. In this paper, we aim to discuss the potential pitfalls that one faces in such efforts and to promote a Bayesian interpolation framework for this purpose. We apply this framework to synthetic data and to data from a recent study of large-scale recordings in mouse primary visual cortex (V1), where the exponent of a power-law scaling in the data played an important role: its value was argued to determine whether the population's stimulus-response relationship is smooth, and experimental data was provided to confirm that this is indeed so. Our analysis shows that with such data types and sizes as we consider here, the best-fit values found for the parameters of the power law and the uncertainty for these estimates are heavily dependent on the noise model assumed for the estimation, the range of the data chosen, and (with all other things being equal) the particular recordings. It is thus challenging to offer a reliable statement about the exponents of the power law. Our analysis, however, shows that this does not affect the conclusions regarding the smoothness of the population response to low-dimensional stimuli but casts doubt on those to natural images. We discuss the implications of this result for the neural code in the V1 and offer the approach discussed here as a framework that future studies, perhaps exploring larger ranges of data, can employ as their starting point to examine power-law scalings in neural data.
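
As a rough illustration of the sensitivity described in the abstract, the sketch below fits a power-law exponent to synthetic data under one particular assumed noise model (i.i.d. Gaussian noise on log y) over several data ranges. The exponent, noise level, and data are illustrative assumptions, not quantities taken from the paper or from any recording.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic eigenspectrum-like data: y_n ~ c * n^(-alpha) with multiplicative noise.
# alpha = 1.04 loosely echoes the exponent discussed for mouse V1 eigenspectra;
# it is an illustrative choice, not a value estimated from any specific dataset.
alpha_true, c_true = 1.04, 10.0
n = np.arange(1, 2001)
y = c_true * n**(-alpha_true) * np.exp(0.05 * rng.standard_normal(n.size))

def fit_power_law(x, y):
    """Least-squares straight-line fit in log-log space, assuming i.i.d. Gaussian
    noise on log y (one of several possible noise models). Returns the exponent
    estimate and its standard error from the usual linear-regression covariance."""
    X = np.column_stack([np.ones_like(x, dtype=float), np.log(x)])  # [1, log x]
    t = np.log(y)
    w, *_ = np.linalg.lstsq(X, t, rcond=None)        # posterior mode under this model
    resid = t - X @ w
    sigma2 = resid @ resid / (len(t) - 2)            # noise-variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)            # parameter covariance
    return -w[1], np.sqrt(cov[1, 1])                 # slope -> exponent

# The fitted exponent (and its nominal error bar) shifts with the chosen range,
# one of the pitfalls the abstract highlights.
for lo, hi in [(1, 100), (10, 1000), (100, 2000)]:
    mask = (n >= lo) & (n <= hi)
    a_hat, a_std = fit_power_law(n[mask], y[mask])
    print(f"range {lo:>4}-{hi:<4}: alpha = {a_hat:.3f} +/- {a_std:.3f}")
```

Swapping in a different noise model (for instance, Gaussian noise on y itself rather than on log y) or a different data range will generally move both the point estimate and its nominal uncertainty; the paper addresses this kind of sensitivity more carefully within a Bayesian interpolation framework rather than with the simple least-squares fit sketched here.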
