Paper Title
GemNet-OC: Developing Graph Neural Networks for Large and Diverse Molecular Simulation Datasets
Paper Authors
Paper Abstract
Recent years have seen the advent of molecular simulation datasets that are orders of magnitude larger and more diverse. These new datasets differ substantially in four aspects of complexity: 1. chemical diversity (number of different elements), 2. system size (number of atoms per sample), 3. dataset size (number of data samples), and 4. domain shift (similarity of the training and test set). Despite these large differences, benchmarks on small and narrow datasets remain the predominant method of demonstrating progress in graph neural networks (GNNs) for molecular simulation, likely due to cheaper training compute requirements. This raises the question: does GNN progress on small and narrow datasets translate to these more complex datasets? This work investigates this question by first developing the GemNet-OC model based on the large Open Catalyst 2020 (OC20) dataset. GemNet-OC outperforms the previous state-of-the-art on OC20 by 16% while reducing training time by a factor of 10. We then compare the impact of 18 model components and hyperparameter choices on performance across multiple datasets. We find that the resulting model would be drastically different depending on the dataset used for making model choices. To isolate the source of this discrepancy, we study six subsets of the OC20 dataset that individually test each of the above-mentioned four dataset aspects. We find that results on the OC-2M subset correlate well with the full OC20 dataset while being substantially cheaper to train on. Our findings challenge the common practice of developing GNNs solely on small datasets, but highlight ways of achieving fast development cycles and generalizable results via moderately-sized, representative datasets such as OC-2M and efficient models such as GemNet-OC. Our code and pretrained model weights are open-sourced.
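The claim that results on the OC-2M subset correlate well with results on the full OC20 dataset can be illustrated with a simple rank-correlation check between per-model errors measured on the two datasets. The sketch below is not from the paper's codebase; the model names and MAE values are hypothetical placeholders, and only the general idea (testing whether model rankings on a small development subset carry over to the full dataset) reflects the abstract.

```python
# Minimal sketch (assumptions labeled): do model rankings on a small
# development subset track rankings on the full dataset?
from scipy.stats import spearmanr

# Hypothetical force-MAE results (eV/Angstrom) for several candidate models,
# evaluated once on a 2M-sample subset and once on the full dataset.
mae_subset = {"model_a": 0.052, "model_b": 0.047, "model_c": 0.061, "model_d": 0.044}
mae_full   = {"model_a": 0.031, "model_b": 0.028, "model_c": 0.039, "model_d": 0.026}

models = sorted(mae_subset)
rho, p_value = spearmanr(
    [mae_subset[m] for m in models],
    [mae_full[m] for m in models],
)

# A rank correlation close to 1 suggests that model choices made on the
# subset would transfer to the full dataset.
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```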