Paper Title
Improving Intelligence of Evolutionary Algorithms Using Experience Share and Replay
Paper Authors
Paper Abstract
We propose PESA, a novel approach that combines Particle Swarm Optimization (PSO), Evolution Strategy (ES), and Simulated Annealing (SA) in a hybrid algorithm inspired by reinforcement learning. PESA hybridizes the three algorithms by storing their solutions in a shared replay memory. PESA then applies prioritized replay to redistribute data between the three algorithms at frequent intervals, based on fitness and priority values, which significantly enhances sample diversity and algorithm exploration. Additionally, greedy replay is used implicitly within SA to improve PESA's exploitation toward the end of evolution. Validation against 12 high-dimensional continuous benchmark functions shows that PESA outperforms standalone ES, PSO, and SA under similar initial starting points, hyperparameters, and numbers of generations. PESA exhibits much better exploration behavior, faster convergence, and a greater ability to find the global optimum than its standalone counterparts. Given this promising performance, PESA can offer an efficient optimization option, especially after additional multiprocessing improvements to handle complex and expensive fitness functions.
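To make the replay mechanism described above concrete, here is a minimal Python sketch of a shared replay memory with the two sampling modes the abstract mentions: prioritized replay (fitness-weighted sampling fed back to PSO, ES, and SA) and greedy replay (best-so-far solutions fed to SA late in the run). The class name, capacity, and fitness-based priority rule are illustrative assumptions for a minimization problem, not the authors' implementation.

```python
import random

class ReplayMemory:
    """Shared archive of (solution, fitness) pairs deposited by
    PSO, ES, and SA after every generation (minimization assumed)."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.buffer = []

    def add(self, pairs):
        """Store new (solution, fitness) pairs, keeping only the
        `capacity` best entries sorted by fitness."""
        self.buffer.extend(pairs)
        self.buffer.sort(key=lambda p: p[1])
        del self.buffer[self.capacity:]

    def sample_prioritized(self, k, alpha=1.0):
        """Prioritized replay: draw k pairs with probability proportional
        to a fitness-based priority, so fitter solutions are replayed
        more often while worse ones still keep some probability mass."""
        worst = self.buffer[-1][1]
        weights = [(worst - f) ** alpha + 1e-12 for _, f in self.buffer]
        return random.choices(self.buffer, weights=weights, k=k)

    def sample_greedy(self, k):
        """Greedy replay: return the k best pairs found so far, used
        to sharpen SA's exploitation near the end of evolution."""
        return self.buffer[:k]

# Toy usage on a 1-D sphere function, standing in for the populations
# the three optimizers would contribute each generation.
memory = ReplayMemory(capacity=100)
population = [(x, x * x) for x in (random.uniform(-5, 5) for _ in range(60))]
memory.add(population)
replayed = memory.sample_prioritized(k=10)  # redistributed to PSO/ES/SA
elites = memory.sample_greedy(k=5)          # injected into SA late in the run
```

In a full PESA loop, each optimizer would advance one generation independently, deposit its population into the memory, and be reseeded from the prioritized samples, with the greedy mode activated only as the run approaches its final generations.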