Paper Title
Penalizing Confident Predictions on Largely Perturbed Inputs Does Not Improve Out-of-Distribution Generalization in Question Answering
Paper Authors
Paper Abstract
Question answering (QA) models are shown to be insensitive to large perturbations to inputs; that is, they make correct and confident predictions even when given largely perturbed inputs from which humans cannot correctly derive answers. In addition, QA models fail to generalize to other domains and adversarial test sets, while humans maintain high accuracy. Based on these observations, we assume that QA models do not use the intended features necessary for human reading but instead rely on spurious features, causing the lack of generalization ability. Therefore, we attempt to answer the question: if the overconfident predictions of QA models on various types of perturbations are penalized, will out-of-distribution (OOD) generalization improve? To prevent models from making confident predictions on perturbed inputs, we first follow existing studies and maximize the entropy of the output probability for perturbed inputs. However, we find that QA models trained to be sensitive to a certain perturbation type are often insensitive to unseen types of perturbations. Thus, we simultaneously maximize the entropy for the four perturbation types (i.e., word- and sentence-level shuffling and deletion) to further close the gap between models and humans. Contrary to our expectations, although models become sensitive to the four types of perturbations, we find that OOD generalization is not improved. Moreover, OOD generalization is sometimes degraded after entropy maximization. Making unconfident predictions on largely perturbed inputs may in itself be beneficial for gaining human trust. However, our negative results suggest that researchers should pay attention to the side effects of entropy maximization.
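The abstract does not specify implementation details, but the training objective it describes (maximizing the entropy of the answer distribution on perturbed inputs) can be sketched as follows. This is a minimal illustration assuming a transformers-style extractive QA model with `start_logits`/`end_logits` outputs; `perturb_fns` (e.g., word- or sentence-level shuffling and deletion) and the weight `lambda_ent` are hypothetical names, not from the paper.

```python
import torch
import torch.nn.functional as F


def entropy_of_logits(logits: torch.Tensor) -> torch.Tensor:
    """Per-example Shannon entropy of the categorical distribution given by `logits`."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)


def entropy_max_loss(model, batch, perturb_fns, lambda_ent=1.0):
    """Supervised QA loss plus an entropy-maximization penalty on perturbed inputs.

    `perturb_fns` are assumed callables that map (input_ids, attention_mask)
    to a perturbed copy, one per perturbation type.
    """
    # Standard extractive-QA loss on the clean inputs.
    out = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        start_positions=batch["start_positions"],
        end_positions=batch["end_positions"],
    )
    loss = out.loss

    # For each perturbation type, push the answer distributions toward uniform:
    # subtracting the entropy from the loss means minimizing the loss maximizes it,
    # i.e., confident predictions on perturbed inputs are penalized.
    for perturb in perturb_fns:
        pert_ids, pert_mask = perturb(batch["input_ids"], batch["attention_mask"])
        pert_out = model(input_ids=pert_ids, attention_mask=pert_mask)
        ent = entropy_of_logits(pert_out.start_logits) + entropy_of_logits(pert_out.end_logits)
        loss = loss - lambda_ent * ent.mean()

    return loss
```

In this sketch, the entropy terms for all perturbation types are summed into a single objective, which corresponds to the paper's setting of maximizing entropy for the four perturbation types simultaneously rather than one at a time.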