Paper Title

Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models

Paper Authors

Ting Wu, Tao Gui

Paper Abstract

Datasets with significant proportions of bias present threats to training a trustworthy model on NLU tasks. Despite yielding great progress, current debiasing methods impose excessive reliance on knowledge of the bias attributes. The definition of these attributes, however, is elusive and varies across datasets. Furthermore, leveraging these attributes at the input level for bias mitigation may leave a gap between intrinsic properties and the underlying decision rule. To narrow this gap and liberate the supervision on bias, we suggest extending bias mitigation into the feature space. To this end, a novel model, Recovering Intended-Feature Subspace with Knowledge-Free (RISK), is developed. Assuming that shortcut features caused by various biases are unintended for prediction, RISK views them as redundant features. When delving into a lower manifold to remove redundancies, RISK reveals that an extremely low-dimensional subspace of intended features can robustly represent the highly biased dataset. Empirical results demonstrate that our model consistently improves generalization to out-of-distribution sets and achieves new state-of-the-art performance.
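
To make the feature-space idea concrete, the sketch below shows one generic way to recover a low-dimensional feature subspace and project encoder representations onto it, using truncated SVD (PCA). This is an illustrative stand-in under stated assumptions, not RISK's actual algorithm: the helper names recover_subspace and project, the rank k=16, and the choice of SVD are all hypothetical; the paper's knowledge-free procedure is described in the full text.

# Minimal sketch (NOT the paper's method): represent a dataset of
# NLU encoder features with an extremely low-dimensional subspace.
import numpy as np

def recover_subspace(features: np.ndarray, k: int) -> np.ndarray:
    """Return a (k, d) basis spanning the top-k principal subspace.

    features: (n_samples, d) matrix of sentence representations,
              e.g. [CLS] embeddings from a frozen NLU encoder.
    k:        target subspace dimension, assumed k << d.
    """
    centered = features - features.mean(axis=0, keepdims=True)
    # Truncated SVD: the first k rows of vt span the k-dim subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

def project(features: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project features onto the subspace and map back to R^d,
    discarding all directions outside the retained basis."""
    return features @ basis.T @ basis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 768))   # stand-in for BERT features
    basis = recover_subspace(feats, k=16)  # hypothetical rank choice
    low_dim = feats @ basis.T              # (1000, 16) compact representation
    print(low_dim.shape)

Projecting back through the rank-k basis discards every direction outside the retained subspace, which is the sense in which "less is better": directions carrying unintended (shortcut) features are treated as removable redundancy.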
