Paper Title

Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs

Paper Authors

Custode, Leonardo Lucio; Iacca, Giovanni

Paper Abstract

The importance of explainability in AI has become a pressing concern, and several explainable AI (XAI) approaches have been proposed in response. However, most of the available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not exactly reflect the state of the original models. A more direct way to achieve XAI is therefore through interpretable (also called glass-box) models. These models have been shown to obtain comparable (and, in some cases, better) performance with respect to black-box models in various tasks such as classification and reinforcement learning. However, they struggle when working with raw data, especially when the input dimensionality increases and the raw inputs alone do not provide valuable insight into the decision-making process. Here, we propose to use end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms, which allows us to decompose the decision-making process into two parts: computing high-level features from raw data, and reasoning on the extracted high-level features. We test our approach in reinforcement learning environments from the Atari benchmark, where we obtain results comparable to black-box approaches in settings without stochastic frame-skipping, while performance degrades in settings with frame-skipping.
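
The abstract does not specify which interpretable models fill the two roles, so the decomposition it describes can only be illustrated with a toy sketch. The Python snippet below is an illustrative assumption, not the authors' implementation: it builds a pipeline of two interpretable modules (a threshold-based feature extractor over a raw 1-D "frame" and a two-rule decision-list policy) and co-optimizes the concatenated parameters of both modules with a simple (1+λ) evolution strategy. The toy environment and all names (`extract_features`, `policy`, `episode_return`) are hypothetical.

```python
# Minimal sketch (NOT the paper's code) of a two-module interpretable pipeline
# co-optimized by an evolutionary algorithm, on a toy Pong-like 1-D task.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 16  # width of the toy 1-D "screen"

def render(ball, paddle):
    """Raw observation: a 1-D frame of pixel intensities."""
    frame = np.zeros(SIZE)
    frame[ball] = 1.0
    frame[paddle] = 0.5
    return frame

def extract_features(frame, genome):
    """Module 1 (interpretable): locate ball and paddle via evolved intensity
    thresholds and return one named high-level feature: their signed offset."""
    ball_thr, paddle_thr = genome[0], genome[1]
    ball_px = np.where(frame > ball_thr)[0]
    paddle_px = np.where((frame > paddle_thr) & (frame <= ball_thr))[0]
    ball_x = ball_px[0] if len(ball_px) else 0
    paddle_x = paddle_px[0] if len(paddle_px) else 0
    return ball_x - paddle_x

def policy(offset, genome):
    """Module 2 (interpretable): a two-rule decision list on the feature."""
    dead_zone = genome[2]
    if offset > dead_zone:
        return +1   # ball to the right -> move right
    if offset < -dead_zone:
        return -1   # ball to the left  -> move left
    return 0        # aligned -> stay

def episode_return(genome, episodes=5, steps=20):
    """Roll out the full extractor->policy pipeline and average the reward."""
    total = 0.0
    for _ in range(episodes):
        ball, paddle = int(rng.integers(SIZE)), SIZE // 2
        for _ in range(steps):
            action = policy(extract_features(render(ball, paddle), genome), genome)
            paddle = int(np.clip(paddle + action, 0, SIZE - 1))
            total += 1.0 if paddle == ball else 0.0
    return total / episodes

# (1+lambda) evolution strategy over the concatenated genome of BOTH modules,
# so extractor thresholds and policy rules are optimized cooperatively.
genome = np.array([0.9, 0.3, 0.5])
best = episode_return(genome)
for gen in range(100):
    for child in genome + 0.1 * rng.standard_normal((8, genome.size)):
        fitness = episode_return(child)
        if fitness >= best:
            genome, best = child, fitness
print("evolved genome:", np.round(genome, 2), "avg return:", best)
```

Because both modules are small and parameterized by a handful of thresholds, the evolved pipeline can be read off directly (e.g., "if the ball is more than `dead_zone` pixels to the right of the paddle, move right"), which is the kind of end-to-end interpretability the abstract argues for.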
