Paper Title

LightAMR format standard and lossless compression algorithms for adaptive mesh refinement grids: RAMSES use case

Authors

Loïc Strafella, Damien Chapon

Abstract

The evolution of parallel I/O libraries, as well as new concepts such as 'in transit' and 'in situ' visualization and analysis, have been identified as key technologies to circumvent the I/O bottleneck in pre-exascale applications. Nevertheless, data structures and data formats can also be improved, both to reduce I/O volume and to improve data interoperability between data producer and data consumer. In this paper, we propose a very lightweight and purpose-specific post-processing data model for AMR meshes, called LightAMR. Based on this data model, we introduce a tree pruning algorithm that removes data redundancy from a fully threaded AMR octree. In addition, we present two lossless compression algorithms, one for the AMR grid structure description and one for AMR double/single precision physical quantity scalar fields. Then we present performance benchmarks of this new LightAMR data model and of the pruning and compression algorithms on RAMSES simulation datasets. We show that our pruning algorithm can reduce the total number of cells in RAMSES AMR datasets by 10-40% without loss of information. Finally, we show that the RAMSES AMR grid structure can be compacted by ~ 3 orders of magnitude, and that the floating-point scalar fields can be compressed by a factor of ~ 1.2 in double precision and ~ 1.3 - 1.5 in single precision, at a compression speed of ~ 1 GB/s.
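To give intuition for why an AMR grid structure description compresses by orders of magnitude, here is a toy sketch (not the paper's actual LightAMR specification): suppose each octree level is described by one refinement flag per cell (1 = refined into children, 0 = leaf). Such flag streams are extremely repetitive, so even simple lossless run-length encoding collapses them dramatically; the `encode_rle`/`decode_rle` helpers below are illustrative names, not functions from the paper.

```python
def encode_rle(flags):
    """Run-length encode a list of 0/1 refinement flags as [value, count] pairs."""
    runs = []
    for f in flags:
        if runs and runs[-1][0] == f:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([f, 1])       # start a new run
    return runs

def decode_rle(runs):
    """Expand [value, count] pairs back into the original flag stream."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A mostly-unrefined level: one small refined patch in a sea of leaf cells.
level_flags = [0] * 500 + [1] * 8 + [0] * 500
runs = encode_rle(level_flags)
assert decode_rle(runs) == level_flags        # lossless round trip
print(len(level_flags), "flags ->", len(runs), "runs")  # 1008 flags -> 3 runs
```

Real AMR grid descriptions are less trivially uniform than this, but the same principle (lossless entropy reduction on a highly regular structure stream) is what makes order-of-magnitude compaction of the grid description plausible.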
