Paper Title
The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types
Paper Authors
Paper Abstract
When inferring reward functions from human behavior (be it demonstrations, comparisons, physical corrections, or e-stops), it has proven useful to model the human as making noisy-rational choices, with a "rationality coefficient" capturing how much noise or entropy we expect to see in the human behavior. Prior work typically sets the rationality level to a constant value, regardless of the type, or quality, of human feedback. However, in many settings, giving one type of feedback (e.g. a demonstration) may be much more difficult than a different type of feedback (e.g. answering a comparison query). Thus, we expect to see more or less noise depending on the type of human feedback. In this work, we advocate that grounding the rationality coefficient in real data for each feedback type, rather than assuming a default value, has a significant positive effect on reward learning. We test this in both simulated experiments and in a user study with real human feedback. We find that overestimating human rationality can have dire effects on reward learning accuracy and regret. We also find that fitting the rationality coefficient to human data enables better reward learning, even when the human deviates significantly from the noisy-rational choice model due to systematic biases. Further, we find that the rationality level affects the informativeness of each feedback type: surprisingly, demonstrations are not always the most informative -- when the human acts very suboptimally, comparisons actually become more informative, even when the rationality level is the same for both. Ultimately, our results emphasize the importance and advantage of paying attention to the assumed human-rationality level, especially when agents actively learn from multiple types of human feedback.
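The noisy-rational choice model described above is commonly formalized as a Boltzmann (softmax) distribution over options, with the rationality coefficient (often written β) scaling how reliably the human picks the higher-reward option: β → 0 yields uniformly random choices, while β → ∞ yields perfectly rational ones. The sketch below illustrates the core idea the abstract advocates, fitting β to comparison data by maximum likelihood rather than assuming a default value. This is a minimal illustration, not the paper's implementation; the grid-search fit, synthetic rewards, and sample size are all assumptions chosen for clarity.

```python
import math
import random

def p_prefer(beta, r_a, r_b):
    # Boltzmann-rational probability that the human prefers option a over b:
    # P(a > b) = exp(beta * r_a) / (exp(beta * r_a) + exp(beta * r_b))
    return 1.0 / (1.0 + math.exp(-beta * (r_a - r_b)))

def log_likelihood(beta, comparisons):
    # Log-likelihood of observed comparisons (chosen_reward, rejected_reward)
    return sum(math.log(p_prefer(beta, r_c, r_r)) for r_c, r_r in comparisons)

random.seed(0)
true_beta = 2.0

# Simulate comparison feedback from a noisy-rational human with known beta.
comparisons = []
for _ in range(500):
    r_a, r_b = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    if random.random() < p_prefer(true_beta, r_a, r_b):
        comparisons.append((r_a, r_b))  # human chose a
    else:
        comparisons.append((r_b, r_a))  # human chose b

# Ground beta in the data: maximum-likelihood fit via a simple grid search.
candidate_betas = [0.1 * i for i in range(1, 101)]
fit_beta = max(candidate_betas, key=lambda b: log_likelihood(b, comparisons))
print(f"true beta = {true_beta}, fitted beta = {fit_beta:.1f}")
```

In the paper's framing, a separate β would be fit per feedback type (demonstrations, comparisons, corrections, e-stops), since each type exhibits a different noise level; the same likelihood-maximization recipe applies with the appropriate choice model for each type.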