Paper Title
To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles
Paper Authors
Paper Abstract
Explainable AI, in the context of autonomous systems like self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but have put little emphasis on when an explanation is needed and how its content changes with the driving context. In this work, we investigate the scenarios in which people need explanations and how the critical degree of explanation shifts with the situation and driver type. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure its impact on their trust in self-driving cars in different contexts. Moreover, we present a self-driving explanation dataset with first-person explanations and associated measures of explanation necessity for 1103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Our research reveals that driver type and driving scenario dictate whether an explanation is necessary. In particular, people tend to agree on the necessity of explanations for near-crash events but hold differing opinions on ordinary or anomalous driving situations.