Paper Title

A Comparative Study of Deep Reinforcement Learning-based Transferable Energy Management Strategies for Hybrid Electric Vehicles

Paper Authors

Jingyi Xu, Zirui Li, Li Gao, Junyi Ma, Qi Liu, Yanan Zhao

Paper Abstract

Deep reinforcement learning-based energy management strategies (EMS) have become a promising solution for hybrid electric vehicles (HEVs). When the driving cycle changes, however, the neural network must be retrained, which is a time-consuming and laborious task. A more efficient way to obtain an EMS is to combine deep reinforcement learning (DRL) with transfer learning, which transfers knowledge from one domain to a new domain so that the network in the new domain converges quickly. In this work, different DRL exploration methods, including adding action space noise and parameter space noise, are compared against each other during the transfer learning process. Results indicate that the network with parameter space noise is more stable and converges faster than the others. In conclusion, the best exploration method for a transferable EMS is to add noise in the parameter space, while the combination of action space noise and parameter space noise generally performs poorly. Our code is available at https://github.com/BIT-XJY/RL-based-Transferable-EMS.git.
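
For intuition, here is a minimal sketch of the two exploration schemes the abstract contrasts, plus the warm-start step that transfer learning implies. It assumes a PyTorch-style deterministic actor; the `Actor` class, layer sizes, state/action dimensions, and noise scales are hypothetical and are not taken from the authors' released code.

```python
import copy
import numpy as np
import torch
import torch.nn as nn

# Illustrative sketch only (not the authors' implementation).

class Actor(nn.Module):
    """Deterministic policy network mapping state features to a bounded action."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def act_with_action_space_noise(actor: Actor, state: torch.Tensor, sigma: float = 0.1) -> np.ndarray:
    """Action space noise: perturb the deterministic action output."""
    with torch.no_grad():
        action = actor(state).numpy()
    return np.clip(action + np.random.normal(0.0, sigma, size=action.shape), -1.0, 1.0)

def act_with_parameter_space_noise(actor: Actor, state: torch.Tensor, sigma: float = 0.05) -> np.ndarray:
    """Parameter space noise: perturb a copy of the actor's weights, then act greedily."""
    noisy_actor = copy.deepcopy(actor)
    with torch.no_grad():
        for p in noisy_actor.parameters():
            p.add_(torch.randn_like(p) * sigma)
        return noisy_actor(state).numpy()

# Transfer learning in this setting amounts to initializing the new-domain actor
# with weights trained on the source driving cycle, then fine-tuning on the new one.
source_actor = Actor(state_dim=4, action_dim=1)           # trained on the source cycle (hypothetical dims)
target_actor = Actor(state_dim=4, action_dim=1)
target_actor.load_state_dict(source_actor.state_dict())   # warm start for the new driving cycle
```

In this framing, the comparison in the paper is between which perturbation is applied while the warm-started network is fine-tuned on the new driving cycle: noise on the action output, noise on the policy parameters, or both combined.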
