Paper Title

XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models

Paper Authors

Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, Aaron Chan, Ziyi Liu, Kiran Narahari, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren

Paper Abstract

NLP models are susceptible to learning spurious biases (i.e., bugs) that work on some datasets but do not properly reflect the underlying task. Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior, asking users to give feedback on the behavior, then using the feedback to update the model. While existing model debugging methods have shown promise, their prototype-level implementations provide limited practical utility. Thus, we propose XMD: the first open-source, end-to-end framework for explanation-based model debugging. Given task- or instance-level explanations, users can flexibly provide various forms of feedback via an intuitive, web-based UI. After receiving user feedback, XMD automatically updates the model in real time, by regularizing the model so that its explanations align with the user feedback. The new model can then be easily deployed into real-world applications via Hugging Face. Using XMD, we can improve the model's OOD performance on text classification tasks by up to 18%.
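
The abstract describes updating the model by "regularizing the model so that its explanations align with the user feedback." The sketch below illustrates one common way such explanation regularization can be implemented, using gradient-based token attributions penalized on tokens a user flags as spurious. It is a minimal sketch under stated assumptions, not XMD's actual training code; the model choice, the flagged tokens, and names such as `irrelevant_mask` and `lambda_expl` are illustrative.

```python
# Illustrative sketch of explanation regularization (not XMD's actual code):
# penalize gradient-based attributions on tokens the user flagged as spurious,
# so the model's explanations move toward the user's feedback.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # any Hugging Face sequence-classification model
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

text = "the camera quality is terrible but the packaging looked nice"
label = torch.tensor([0])  # hypothetical gold label: negative sentiment
enc = tokenizer(text, return_tensors="pt")

# Hypothetical user feedback: tokens judged irrelevant to the underlying task.
flagged = {"packaging", "nice"}
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
irrelevant_mask = torch.tensor([[1.0 if t in flagged else 0.0 for t in tokens]])

lambda_expl = 1.0  # illustrative strength of the explanation penalty

# Forward pass through input embeddings so gradients can be taken w.r.t. them.
embeds = model.get_input_embeddings()(enc["input_ids"])
out = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"], labels=label)

# Token attribution = norm of the gradient of the gold-label logit w.r.t. embeddings.
logit = out.logits[0, label.item()]
grads = torch.autograd.grad(logit, embeds, create_graph=True)[0]
attribution = grads.norm(dim=-1)  # shape: (1, seq_len)

# Total loss: task loss plus attribution mass on the user-flagged tokens.
expl_loss = (attribution * irrelevant_mask).sum()
loss = out.loss + lambda_expl * expl_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the update only adds a regularization term to an ordinary training step, the debugged model remains a standard Hugging Face checkpoint, consistent with the abstract's note that the new model can be deployed via Hugging Face.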
