Paper Title
A Collection of Quality Diversity Optimization Problems Derived from Hyperparameter Optimization of Machine Learning Models
Paper Authors
Paper Abstract
The goal of Quality Diversity Optimization is to generate a collection of diverse yet high-performing solutions to a given problem. Typical benchmark problems include, for example, finding a repertoire of robot arm configurations or a collection of game-playing strategies. In this paper, we propose a set of Quality Diversity Optimization problems that tackle hyperparameter optimization of machine learning models, a so far underexplored application of Quality Diversity Optimization. Our benchmark problems involve novel feature functions, such as the interpretability or resource usage of models. To allow for fast and efficient benchmarking, we build upon YAHPO Gym, a recently proposed open-source benchmarking suite for hyperparameter optimization that makes use of high-performing surrogate models and returns their predictions instead of evaluating the true, expensive black-box function. We present results of an initial experimental study comparing different Quality Diversity optimizers on our benchmark problems. Furthermore, we discuss future directions and challenges of Quality Diversity Optimization in the context of hyperparameter optimization.
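To make the Quality Diversity setting concrete, the following is a minimal, illustrative MAP-Elites-style loop (one common family of QD optimizers; this is a sketch, not the paper's implementation). It maintains an archive with one elite per feature-space cell, so it collects solutions that are both high-performing and diverse with respect to the feature functions. The `evaluate`, `sample`, and `mutate` functions and the toy objective below are hypothetical stand-ins for a real benchmark such as a surrogate-based hyperparameter-optimization problem.

```python
import random

def map_elites(evaluate, sample, mutate, bins, iterations=1000):
    """Minimal MAP-Elites loop: keep the best solution found per feature-space cell."""
    archive = {}  # cell index (tuple) -> (fitness, solution)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # Mutate a randomly chosen existing elite.
            _, parent = random.choice(list(archive.values()))
            x = mutate(parent)
        else:
            # Otherwise sample a fresh solution.
            x = sample()
        fitness, features = evaluate(x)
        # Discretize the feature vector (assumed to lie in [0, 1]^d) into a grid cell.
        cell = tuple(min(int(f * b), b - 1) for f, b in zip(features, bins))
        # Keep the new solution only if its cell is empty or it beats the incumbent.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, x)
    return archive

# Toy example: maximize -(x^2 + y^2), using the point itself as 2-D features in [0, 1].
def evaluate(x):
    return -(x[0] ** 2 + x[1] ** 2), x

def sample():
    return [random.random(), random.random()]

def mutate(x):
    return [min(max(v + random.gauss(0, 0.1), 0.0), 1.0) for v in x]

archive = map_elites(evaluate, sample, mutate, bins=(10, 10), iterations=2000)
print(len(archive))  # number of filled feature-space cells
```

In the hyperparameter-optimization setting the paper proposes, a solution would be a hyperparameter configuration, the fitness would be (surrogate-predicted) model performance, and the features would be quantities such as interpretability or resource usage.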