Paper Title
The Bearable Lightness of Big Data: Towards Massive Public Datasets in Scientific Machine Learning
Paper Authors
Paper Abstract
In general, large datasets enable deep learning models to perform with good accuracy and generalizability. However, massive high-fidelity simulation datasets (from molecular chemistry, astrophysics, computational fluid dynamics (CFD), etc.) can be challenging to curate due to dimensionality and storage constraints. Lossy compression algorithms can help mitigate limitations from storage, as long as the overall data fidelity is preserved. To illustrate this point, we demonstrate that deep learning models, trained and tested on data from a petascale CFD simulation, are robust to errors introduced during lossy compression in a semantic segmentation problem. Our results demonstrate that lossy compression algorithms offer a realistic pathway for exposing high-fidelity scientific data to open-source data repositories for building community datasets. In this paper, we outline, construct, and evaluate the requirements for establishing a big data framework, demonstrated at https://blastnet.github.io/, for scientific machine learning.
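The abstract's core trade-off (smaller storage footprint versus bounded reconstruction error) can be sketched with a toy experiment. The snippet below is purely illustrative and is not the compression scheme used in the paper: it fakes a smooth scalar field as a stand-in for a CFD snapshot and "compresses" it by downcasting from float64 to float16, then measures the storage ratio and the relative error the lossy step introduces.

```python
import numpy as np

# Hypothetical stand-in for a CFD field: a smooth 2D scalar array
# (a random walk along one axis), NOT data from the paper.
rng = np.random.default_rng(0)
field = np.cumsum(rng.standard_normal((64, 64)), axis=0)  # float64

# Naive lossy "compression": precision truncation to float16.
# Real scientific compressors (e.g., error-bounded ones) are far
# more sophisticated, but the fidelity-vs-size trade-off is the same.
compressed = field.astype(np.float16)
restored = compressed.astype(np.float64)

# Storage shrinks 4x (8 bytes -> 2 bytes per value) ...
ratio = field.nbytes / compressed.nbytes

# ... at the cost of a small, quantifiable reconstruction error.
rel_err = np.abs(restored - field).max() / np.abs(field).max()
print(f"storage ratio: {ratio:.0f}x, max relative error: {rel_err:.1e}")
```

In this toy setting the maximum relative error stays near float16 machine precision, which is the kind of bounded perturbation the paper argues downstream models (here, semantic segmentation) can tolerate.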