Paper Title

Fostering Event Compression using Gated Surprise

Paper Authors

Dania Humaidan, Sebastian Otte, Martin V. Butz

Abstract

Our brain receives a dynamically changing stream of sensorimotor data. Yet, we perceive a rather organized world, which we segment into and perceive as events. Computational theories of cognitive science on event-predictive cognition suggest that our brain forms generative, event-predictive models by segmenting sensorimotor data into suitable chunks of contextual experiences. Here, we introduce a hierarchical, surprise-gated recurrent neural network architecture, which models this process and develops compact compressions of distinct event-like contexts. The architecture contains a contextual LSTM layer, which develops generative compressions of ongoing and subsequent contexts. These compressions are passed into a GRU-like layer, which uses surprise signals to update its recurrent latent state. The latent state is passed forward into another LSTM layer, which processes actual dynamic sensory flow in the light of the provided latent, contextual compression signals. Our model shows to develop distinct event compressions and achieves the best performance on multiple event processing tasks. The architecture may be very useful for the further development of resource-efficient learning, hierarchical model-based reinforcement learning, as well as the development of artificial event-predictive cognition and intelligence.
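The core mechanism described above is a GRU-like layer whose recurrent latent state is only overwritten when a surprise signal indicates an event boundary. The following is a minimal illustrative sketch of such a surprise-gated update, not the paper's actual implementation; the function name, the soft-gate form, and the `gain`/`threshold` constants are assumptions introduced here for illustration.

```python
import math

def sigmoid(x):
    # Standard logistic function, used to form a soft gate in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def surprise_gated_update(latent, candidate, surprise, gain=5.0, threshold=0.5):
    """GRU-like interpolation driven by a surprise signal.

    High surprise (an apparent event boundary) opens the gate and replaces
    the recurrent latent state with the new contextual compression;
    low surprise leaves the old state largely intact.
    """
    g = sigmoid(gain * (surprise - threshold))  # gate opens as surprise rises
    return [(1.0 - g) * h + g * c for h, c in zip(latent, candidate)]

# Within an event, surprise stays low and the latent state persists;
# at an event boundary, surprise spikes and the candidate takes over.
state = [1.0, 0.0]
new_context = [0.0, 1.0]
kept = surprise_gated_update(state, new_context, surprise=0.0)   # ≈ old state
switched = surprise_gated_update(state, new_context, surprise=1.0)  # ≈ candidate
```

The convex combination mirrors the update gate of a standard GRU, except that here the gate is a function of prediction surprise rather than a learned function of the input alone.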
