Paper Title

Wildfire Forecasting with Satellite Images and Deep Generative Model

Paper Authors

Thai-Nam Hoang, Sang Truong, Chris Schmidt

Paper Abstract

Wildfire forecasting has been one of the most critical tasks that humanity strives to master. It plays a vital role in protecting human life. Wildfire prediction, on the other hand, is difficult because of its stochastic and chaotic properties. We tackle the problem by interpreting a series of wildfire images as a video and using it to anticipate how the fire will behave in the future. However, creating video prediction models that account for the inherent uncertainty of the future is challenging. Most published attempts are based on stochastic image-autoregressive recurrent networks, which raise various performance and application difficulties, such as computational cost and limited efficiency on massive datasets. Another possibility is to use fully latent temporal models that combine frame synthesis and temporal dynamics. However, due to design and training issues, no such model for stochastic video prediction has yet been proposed in the literature. This paper addresses these issues by introducing a novel stochastic temporal model whose dynamics are driven in a latent space. It naturally predicts video dynamics, allowing our lighter and more interpretable latent model to beat previous state-of-the-art approaches on the GOES-16 dataset. Results are compared against various benchmark models.
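
For illustration only, the sketch below shows one way a "fully latent" stochastic temporal model of the kind the abstract describes could be organized: each frame is encoded once, the recurrent dynamics and the stochastic sampling operate entirely on latent vectors, and frames are decoded only for output, in contrast to image-autoregressive recurrent models that feed decoded frames back into the recurrence. The PyTorch framework, layer sizes, and the class name LatentVideoPredictor are assumptions made for exposition, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch), not the paper's architecture: a stochastic
# video predictor whose temporal dynamics run entirely in a latent space.
import torch
import torch.nn as nn

class LatentVideoPredictor(nn.Module):
    def __init__(self, frame_channels=1, latent_dim=64):
        super().__init__()
        # Frame encoder: image -> latent vector (layer sizes are illustrative).
        self.encoder = nn.Sequential(
            nn.Conv2d(frame_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Temporal dynamics operate purely on latent vectors, never on frames.
        self.dynamics = nn.GRUCell(latent_dim, latent_dim)
        # Stochasticity: a Gaussian over the next latent state.
        self.to_mu = nn.Linear(latent_dim, latent_dim)
        self.to_logvar = nn.Linear(latent_dim, latent_dim)
        # Decoder: latent vector -> 64x64 frame (illustrative).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, frame_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, context_frames, n_future):
        # context_frames: (batch, time, channels, height, width)
        batch, steps = context_frames.shape[:2]
        h = torch.zeros(batch, self.to_mu.in_features, device=context_frames.device)
        # Condition the recurrent state on the observed context frames.
        for t in range(steps):
            h = self.dynamics(self.encoder(context_frames[:, t]), h)
        # Roll out future latent states; decoded frames are never fed back in.
        predictions = []
        for _ in range(n_future):
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample next latent
            h = self.dynamics(z, h)
            predictions.append(self.decoder(z))
        return torch.stack(predictions, dim=1)  # (batch, n_future, channels, 64, 64)
```

Training such a model would typically add a variational objective (for example, a KL term between prior and posterior latent distributions) alongside the reconstruction loss; that is omitted here for brevity.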
