Paper Title

Evolutionary Architecture Search for Graph Neural Networks

Paper Authors

Min Shi, David A. Wilson, Xingquan Zhu, Yu Huang, Yuan Zhuang, Jianxun Liu, Yufei Tang

Paper Abstract

Automated machine learning (AutoML) has seen a resurgence of interest with the boom of deep learning over the past decade. In particular, Neural Architecture Search (NAS) has received significant attention throughout the AutoML research community and has pushed forward the state-of-the-art in a number of neural models that address grid-like data such as texts and images. However, very little work has been done on Graph Neural Networks (GNNs) learning on unstructured network data. Given the huge number of choices and combinations of components such as aggregators and activation functions, determining a suitable GNN structure for a specific problem normally necessitates tremendous expert knowledge and laborious trials. In addition, slight variations of hyperparameters such as the learning rate and dropout rate can dramatically hurt the learning capacity of a GNN. In this paper, we propose a novel AutoML framework based on the evolution of individual models in a large GNN architecture space involving both neural structures and learning parameters. Instead of optimizing only the model structures with fixed parameter settings, as in existing work, we perform an alternating evolution process between GNN structures and learning parameters to dynamically find the best fit for each other. To the best of our knowledge, this is the first work to introduce and evaluate evolutionary architecture search for GNN models. Experiments and validations demonstrate that evolutionary NAS is capable of matching existing state-of-the-art reinforcement learning approaches for both semi-supervised transductive and inductive node representation learning and classification.
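
To make the alternating evolution idea concrete, below is a minimal, hypothetical Python sketch, not the authors' implementation: the search spaces, the mutate-and-select loop, and the placeholder `fitness` function are illustrative assumptions. A real run would train a GNN for each structure/parameter pair and use its validation accuracy as the fitness.

```python
import random

# Hypothetical, heavily simplified search spaces. The component and
# hyperparameter names follow the abstract (aggregator, activation,
# learning rate, dropout), but the concrete choices are assumptions.
STRUCTURE_SPACE = {
    "aggregator": ["mean", "max", "sum"],
    "activation": ["relu", "tanh", "elu"],
    "hidden_dim": [16, 32, 64],
}
PARAM_SPACE = {
    "learning_rate": [1e-3, 5e-3, 1e-2],
    "dropout": [0.0, 0.2, 0.5],
    "weight_decay": [0.0, 5e-4],
}


def random_genome(space):
    """Sample one configuration (genome) from a search space."""
    return {gene: random.choice(options) for gene, options in space.items()}


def mutate(genome, space):
    """Re-sample a single randomly chosen gene."""
    child = dict(genome)
    gene = random.choice(list(space))
    child[gene] = random.choice(space[gene])
    return child


def fitness(structure, params):
    """Placeholder fitness. A real implementation would train a GNN with
    this structure/parameter pair and return its validation accuracy."""
    key = tuple(sorted(structure.items())) + tuple(sorted(params.items()))
    return (hash(key) % 1000) / 1000.0


def evolve_once(population, space, score):
    """One generation: keep the fitter half, refill by mutating survivors."""
    survivors = sorted(population, key=score, reverse=True)[: len(population) // 2]
    children = [mutate(random.choice(survivors), space)
                for _ in range(len(population) - len(survivors))]
    return survivors + children


# Alternating evolution: evolve structures while the best-known parameters
# are held fixed, then evolve parameters for the best-known structure.
struct_pop = [random_genome(STRUCTURE_SPACE) for _ in range(8)]
param_pop = [random_genome(PARAM_SPACE) for _ in range(8)]
best_params = random_genome(PARAM_SPACE)

for _ in range(3):
    struct_pop = evolve_once(struct_pop, STRUCTURE_SPACE,
                             score=lambda s: fitness(s, best_params))
    best_structure = max(struct_pop, key=lambda s: fitness(s, best_params))
    param_pop = evolve_once(param_pop, PARAM_SPACE,
                            score=lambda p: fitness(best_structure, p))
    best_params = max(param_pop, key=lambda p: fitness(best_structure, p))

print("best structure:", best_structure)
print("best parameters:", best_params)
```

The key point of the sketch is the alternation: each round optimizes one half of the configuration (structure or learning parameters) while the current best of the other half is held fixed, so the two co-adapt rather than being searched with a fixed counterpart.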
