Paper Title
Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules
Paper Authors
Abstract
Graph Convolutional Neural Networks (GCNNs) are a popular class of deep learning (DL) models in materials science for predicting material properties from graph representations of molecular structures. Training an accurate and comprehensive GCNN surrogate for molecular design requires large-scale graph datasets and is usually a time-consuming process. Recent advances in GPUs and distributed computing open a path to effectively reduce the computational cost of GCNN training. However, efficient utilization of high-performance computing (HPC) resources for training requires simultaneously optimizing large-scale data management and scalable stochastic batched optimization techniques. In this work, we focus on building GCNN models on HPC systems to predict the material properties of millions of molecules. We use HydraGNN, our in-house library for large-scale GCNN training, which leverages distributed data parallelism in PyTorch. We use ADIOS, a high-performance data management framework, for efficient storage and reading of large molecular graph data. We perform parallel training on two open-source large-scale graph datasets to build a GCNN predictor for an important quantum property known as the HOMO-LUMO gap. We measure the scalability, accuracy, and convergence of our approach on two DOE supercomputers: the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) and the Perlmutter system at the National Energy Research Scientific Computing Center (NERSC). Our experimental results with HydraGNN show (i) a reduction in data loading time of up to 4.2x compared with a conventional method and (ii) linear scaling performance for training on up to 1,024 GPUs on both Summit and Perlmutter.
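The abstract states that HydraGNN leverages distributed data parallelism in PyTorch, where each rank trains on its own shard of the data and gradients are all-reduced across ranks during the backward pass. The sketch below is a minimal, hypothetical illustration of that pattern (single process, CPU `gloo` backend, a stand-in linear model instead of a GCNN); it is not HydraGNN's actual code, and the function name `ddp_train_step` is illustrative.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def ddp_train_step():
    # Single-process stand-in for a multi-rank job launcher;
    # real HPC runs would set these via srun/jsrun or torchrun.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # Stand-in for a GCNN surrogate predicting a scalar (e.g. HOMO-LUMO gap).
    model = DDP(nn.Linear(8, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Each rank would load its own shard of the molecular graph data.
    x = torch.randn(16, 8)
    y = torch.randn(16, 1)

    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()   # DDP all-reduces gradients across ranks here
    optimizer.step()

    dist.destroy_process_group()
    return loss.item()
```

In a real multi-node run, `rank` and `world_size` come from the job environment, and a `DistributedSampler` partitions the dataset so that each GPU sees a disjoint subset of the molecules per epoch.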