Title

Smooth Variational Graph Embeddings for Efficient Neural Architecture Search

Authors

Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, Margret Keuper

Abstract

Neural architecture search (NAS) has recently been addressed from various directions, including discrete, sampling-based methods and efficient differentiable approaches. While the former are notoriously expensive, the latter suffer from the strong constraints they impose on the search space. Architecture optimization from a learned embedding space, for example through graph neural network based variational autoencoders, builds a middle ground and leverages advantages from both sides. Such approaches have recently shown good performance on several benchmarks. Yet, their stability and predictive power heavily depend on their capacity to reconstruct networks from the embedding space. In this paper, we propose a two-sided variational graph autoencoder that smoothly encodes and accurately reconstructs neural architectures from various search spaces. We evaluate the proposed approach on neural architectures defined by the ENAS approach and by the NAS-Bench-101 and NAS-Bench-201 search spaces, and show that our smooth embedding space allows the performance prediction to be extrapolated directly to architectures outside the seen domain (e.g., architectures with more operations). Thus, it facilitates the prediction of good network architectures even without expensive Bayesian optimization or reinforcement learning.
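To make the core idea concrete, below is a minimal PyTorch sketch of a variational graph encoder for a NAS cell represented as a DAG (adjacency matrix plus one-hot node operations). This is an illustrative assumption, not the authors' implementation: the class name GraphVAEEncoder, the layer sizes, and the mean pooling are hypothetical, and the paper's "two-sided" encoding (processing edges in both directions) is only roughly approximated here by symmetrizing the adjacency matrix.

```python
import torch
import torch.nn as nn


class GraphVAEEncoder(nn.Module):
    """Variational encoder over a NAS cell given as a DAG.

    Hypothetical sketch: layer sizes and pooling are illustrative
    assumptions, not the paper's exact architecture.
    """

    def __init__(self, num_ops, hidden_dim=32, latent_dim=16, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(num_ops, hidden_dim)
        self.gnn_layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, adj, ops):
        # adj: (n, n) adjacency matrix of the cell DAG
        # ops: (n, num_ops) one-hot operation label per node
        h = torch.relu(self.input_proj(ops))
        # Propagate along both edge directions (a rough stand-in for the
        # paper's "two-sided" encoding), with self-loops added.
        prop = adj + adj.t() + torch.eye(adj.size(0))
        for layer in self.gnn_layers:
            h = torch.relu(layer(prop @ h))
        g = h.mean(dim=0)  # graph-level embedding via mean pooling
        mu, logvar = self.mu_head(g), self.logvar_head(g)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar


# Toy usage: a fully connected 5-node cell with 3 candidate operations.
adj = torch.triu(torch.ones(5, 5), diagonal=1)
ops = torch.eye(3)[torch.tensor([0, 1, 2, 1, 0])]
z, mu, logvar = GraphVAEEncoder(num_ops=3)(adj, ops)
print(z.shape)  # torch.Size([16])
```

In a setup of this kind, a decoder would reconstruct adjacency and operation labels from z, and a performance predictor could regress accuracy directly from z; the abstract's extrapolation claim corresponds to querying that predictor on embeddings of cells larger than any seen during training.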
