Title
Adversarial Attacks on Reinforcement Learning based Energy Management Systems of Extended Range Electric Delivery Vehicles
Authors
Abstract
Adversarial examples were first investigated in the area of computer vision: by adding carefully designed "noise" to an original input image, an attacker can produce a perturbed image that humans cannot distinguish from the original, yet that easily fools a well-trained classifier. In recent years, researchers have also demonstrated that, using similar methods on image inputs, adversarial examples can mislead deep reinforcement learning (DRL) agents playing video games. However, although DRL has become increasingly popular in the area of intelligent transportation systems, little research has investigated the impact of adversarial attacks on such systems, especially for algorithms that do not take images as inputs. In this work, we investigate several fast methods for generating adversarial examples that significantly degrade the performance of a well-trained DRL-based energy management system of an extended range electric delivery vehicle. The perturbed inputs are low-dimensional state representations and remain close to the original inputs as quantified by different kinds of norms. Our work shows that, before DRL agents are applied to real-world transportation systems, adversarial examples in the form of cyber-attacks should be considered carefully, especially for applications that may lead to serious safety issues.
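
The abstract refers to "several fast methods" for generating adversarial examples; the best-known method in this family is the Fast Gradient Sign Method (FGSM). As an illustration only (not the paper's exact implementation), the following PyTorch sketch perturbs a low-dimensional state vector so that a trained policy network is pushed away from its originally chosen action. The name policy_net, the state dimension, and the bound epsilon are hypothetical placeholders, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_state_attack(policy_net, state, epsilon=0.05):
        """FGSM-style perturbation of a low-dimensional state vector.

        Sketch under assumptions: `policy_net` maps a batched state
        (shape [1, state_dim]) to action logits. The attack shifts the
        state by `epsilon` along the sign of the loss gradient, nudging
        the agent away from its unperturbed action while keeping the
        L-infinity distance to the original state at most `epsilon`.
        """
        state = state.clone().detach().requires_grad_(True)
        logits = policy_net(state)
        clean_action = logits.argmax(dim=-1)          # agent's unperturbed choice
        loss = F.cross_entropy(logits, clean_action)  # loss of that choice
        loss.backward()
        return (state + epsilon * state.grad.sign()).detach()

    # Hypothetical usage: an 8-dimensional state (e.g. battery SOC, vehicle
    # speed, power-demand features) fed to a trained policy network.
    # adv_state = fgsm_state_attack(trained_policy, torch.randn(1, 8))

Because the gradient sign bounds each coordinate's change by epsilon, the perturbed state stays within an L-infinity ball of the original input, matching the abstract's point that the attack inputs remain close to the originals under a chosen norm.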