Title
Textual Explanations and Critiques in Recommendation Systems
Author
Abstract
Artificial intelligence and machine learning algorithms have become ubiquitous. Although they offer a wide range of benefits, their adoption in decision-critical fields is limited by a lack of interpretability, particularly when models operate on textual data. Moreover, with more data available than ever before, explaining automated predictions has become increasingly important. Users generally find it difficult to understand the underlying computational processes and to interact with the models, especially when the models fail to generate correct outcomes, correct explanations, or both. This problem highlights users' growing need to understand the models' inner workings and to gain control over their behavior. This dissertation focuses on two fundamental challenges in addressing this need. The first involves explanation generation: inferring high-quality explanations from text documents in a scalable, data-driven manner. The second involves making explanations actionable, which we refer to as critiquing. This dissertation examines two important applications, in natural language processing and in recommendation tasks. Overall, we demonstrate that interpretability does not come at the cost of reduced performance in these two consequential applications, and our framework is applicable to other fields as well. This dissertation thus presents an effective means of closing the gap between promise and practice in artificial intelligence.