Paper Title

Explaining RADAR features for detecting spoofing attacks in Connected Autonomous Vehicles

Paper Authors

Nidhi Rastogi, Sara Rampazzi, Michael Clifford, Miriam Heller, Matthew Bishop, Karl Levitt

Paper Abstract

Connected autonomous vehicles (CAVs) are anticipated to have built-in AI systems for defending against cyberattacks. Machine learning (ML) models form the basis of many such AI systems. These models are notorious for acting like black boxes, transforming inputs into solutions with great accuracy, but with no explanations to support their decisions. Explanations are needed to communicate model performance, make decisions transparent, and establish trust in the models with stakeholders. Explanations can also indicate when humans must take control, for instance, when the ML model makes low-confidence decisions or offers multiple or ambiguous alternatives. Explanations also provide evidence for post-incident forensic analysis. Research on explainable ML for security problems is limited, and more so for CAVs. This paper surfaces a critical yet under-researched sensor data uncertainty problem in training ML attack detection models, especially in highly mobile and risk-averse platforms such as autonomous vehicles. We present a model that explains certainty and uncertainty in sensor input -- a characteristic missing from data collection. We hypothesize that, without explainable input data quality, model explanations for a given system are inaccurate. We estimate uncertainty and mass functions for features in radar sensor data and incorporate them into the training model through experimental evaluation. The mass functions allow the classifier to categorize all spoofed inputs accurately with an incorrect class label.
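The abstract describes estimating uncertainty and mass functions for radar features and incorporating them into the training model. The sketch below is an illustrative reading of that idea, not the authors' implementation: it assigns Dempster-Shafer-style masses to {genuine}, {spoofed}, and the uncertain set {genuine, spoofed} based on how far each radar feature deviates from an assumed nominal value, appends those masses to the feature vector, and trains an off-the-shelf classifier. The feature names, nominal values, tolerances, and classifier choice are all assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation): augment radar features with
# per-feature uncertainty/mass values before training a spoofing-detection classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mass_function(value, expected, tolerance):
    """Assign belief mass to {genuine}, {spoofed}, and the uncertain set
    {genuine, spoofed} from a feature's deviation from its assumed nominal value."""
    deviation = min(abs(value - expected) / tolerance, 1.0)
    m_spoofed = deviation                       # large deviation -> evidence of spoofing
    m_genuine = (1.0 - deviation) ** 2          # small deviation -> evidence of genuine data
    m_uncertain = 1.0 - m_spoofed - m_genuine   # residual mass = uncertainty
    return m_genuine, m_spoofed, max(m_uncertain, 0.0)

def augment_with_masses(X, expected, tolerance):
    """Append the three mass values for every feature column to each row of X."""
    masses = np.array([
        [v for col, exp, tol in zip(row, expected, tolerance)
           for v in mass_function(col, exp, tol)]
        for row in X
    ])
    return np.hstack([X, masses])

# Toy radar feature vectors: [range_m, radial_velocity_mps, azimuth_deg] (hypothetical)
X = np.array([[12.0, 3.1, 0.5],
              [11.8, 3.0, 0.4],
              [40.0, 9.5, 7.0]])
y = np.array([0, 0, 1])                    # 0 = genuine, 1 = spoofed
expected  = np.array([12.0, 3.0, 0.5])     # assumed nominal values
tolerance = np.array([5.0, 2.0, 2.0])      # assumed tolerances

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(augment_with_masses(X, expected, tolerance), y)
```

The deviation-based masses here always sum to one (m_uncertain = d(1 - d) for deviation d in [0, 1]), so they form a valid mass assignment; the paper's actual estimation procedure may differ.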
