Paper Title

The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models

Paper Authors

Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, Ann Yuan

Paper Abstract

We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows, including exploring counterfactuals for sentiment analysis, measuring gender bias in coreference systems, and exploring local behavior in text generation. LIT supports a wide range of models--including classification, seq2seq, and structured prediction--and is highly extensible through a declarative, framework-agnostic API. LIT is under active development, with code and full documentation available at https://github.com/pair-code/lit.
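To give a concrete feel for the declarative, framework-agnostic API mentioned in the abstract, the sketch below shows roughly how a binary sentiment classifier and a small dataset might be wrapped for LIT. It follows the spec-based Model/Dataset pattern described in LIT's documentation, but the exact class names, method signatures, and type constructors used here are assumptions for illustration; the repository linked above is the authoritative reference.

```python
# A minimal sketch (not the authors' exact code) of wrapping a model for LIT.
# Assumes the spec-based Model/Dataset API described in LIT's documentation;
# method names and signatures are illustrative and may differ by release.
from lit_nlp import dev_server
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types


class SentimentDataset(lit_dataset.Dataset):
  """A tiny in-memory dataset of labeled sentences."""

  def __init__(self):
    self._examples = [
        {"sentence": "A great, warm-hearted film.", "label": "1"},
        {"sentence": "Tedious and overlong.", "label": "0"},
    ]

  def spec(self):
    # Declares the fields each example provides.
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=["0", "1"]),
    }


class SentimentModel(lit_model.Model):
  """Wraps an arbitrary classifier behind LIT's framework-agnostic API."""

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    return {
        "probas": lit_types.MulticlassPreds(vocab=["0", "1"], parent="label"),
    }

  def predict(self, inputs):
    # Call into any underlying framework (TF, PyTorch, ...) here;
    # a fixed dummy distribution stands in for real predictions.
    for _ in inputs:
      yield {"probas": [0.3, 0.7]}


if __name__ == "__main__":
  server = dev_server.Server(
      models={"sentiment": SentimentModel()},
      datasets={"sst_dev": SentimentDataset()},
      port=5432,
  )
  server.serve()
```

Because the model and dataset only describe their inputs and outputs through these specs, LIT's browser UI can attach local explanations, aggregate metrics, and counterfactual generators without knowing which framework produced the predictions.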
