Paper Title
Human Evaluation of Interpretability: The Case of AI-Generated Music Knowledge
Paper Authors
Paper Abstract
Interpretability of machine learning models has gained increasing attention among researchers in the artificial intelligence (AI) and human-computer interaction (HCI) communities. Most existing work focuses on decision making, whereas we consider knowledge discovery. In particular, we focus on evaluating AI-discovered knowledge/rules in the arts and humanities. Starting from a specific scenario, we present an experimental procedure to collect and assess human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects. Our goal is to reveal both the possibilities and the challenges in the process of decoding expressive messages from AI sources. We treat this as a first step towards 1) better design of AI representations that are human interpretable and 2) a general methodology for evaluating the interpretability of AI-discovered knowledge representations.