Paper Title

Context Limitations Make Neural Language Models More Human-Like

Paper Authors

Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui

Abstract

Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading. This study highlights a limitation of modern neural LMs as the model of choice for this purpose: there is a discrepancy between their context access capacities and those of humans. Our results showed that constraining the LMs' context access improved their simulation of human reading behavior. We also showed that LM-human gaps in context access were associated with specific syntactic constructions; incorporating syntactic biases into LMs' context access might enhance their cognitive plausibility.
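The core technique the abstract describes -- computing per-word surprisal while restricting how much preceding context the model may condition on -- can be sketched as follows. This is a minimal illustration with a hypothetical toy probability model (`toy_prob` is an assumption for demonstration, not the paper's neural LMs, and the specific truncation scheme here is only one way to constrain context access):

```python
import math

def toy_prob(word, context, vocab_size=50):
    # Hypothetical stand-in for a neural LM's next-word distribution:
    # words already seen in the visible context receive extra probability
    # mass (a crude recency effect), with add-one smoothing elsewhere.
    return (context.count(word) + 1) / (len(context) + vocab_size)

def surprisals(tokens, context_limit=None):
    # Per-token surprisal -log2 P(w_t | context). A finite `context_limit`
    # truncates the visible context to its last k tokens, mimicking the
    # constrained context access studied in the paper.
    values = []
    for t, word in enumerate(tokens):
        context = tokens[:t]
        if context_limit is not None:
            context = context[-context_limit:]
        values.append(-math.log2(toy_prob(word, context)))
    return values

tokens = ["the", "cat", "sat", "on", "the", "mat"]
full = surprisals(tokens)                      # unlimited context
limited = surprisals(tokens, context_limit=2)  # last two tokens only
# With full context, the second "the" is less surprising, because the
# model can still "see" its earlier occurrence; truncation removes it.
```

In the paper's setting, such surprisal profiles are compared against human reading-time data, and a tighter fit under a truncated context is the evidence for the human-likeness of context-limited models.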
