Paper Title
Beyond Bayes-optimality: meta-learning what you know you don't know
Paper Authors
Paper Abstract
Meta-training agents with memory has been shown to culminate in Bayes-optimal agents, which casts Bayes-optimality as the implicit solution to a numerical optimization problem rather than an explicit modeling assumption. Bayes-optimal agents are risk-neutral, since they solely attune to the expected return, and ambiguity-neutral, since they act in new situations as if the uncertainty were known. This is in contrast to risk-sensitive agents, which additionally exploit the higher-order moments of the return, and ambiguity-sensitive agents, which act differently when they recognize situations in which they lack knowledge. Humans are also known to be averse to ambiguity and sensitive to risk in ways that are not Bayes-optimal, indicating that such sensitivity can confer advantages, especially in safety-critical situations. How can we extend the meta-learning protocol to generate risk- and ambiguity-sensitive agents? The goal of this work is to fill this gap in the literature by showing that risk- and ambiguity-sensitivity also emerge as the result of an optimization problem, obtained with modified meta-training algorithms that manipulate the experience-generation process of the learner. We empirically test the proposed meta-training algorithms on agents exposed to foundational classes of decision-making experiments and demonstrate that they become sensitive to risk and ambiguity.
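As a hedged illustration of the "higher-order moments" point above (this is standard exponential-utility material, not a formula taken from the paper itself): the entropic risk of a return R at inverse temperature \beta is

V_\beta(R) = \frac{1}{\beta} \log \mathbb{E}\left[ e^{\beta R} \right],

and its Taylor expansion around \beta = 0,

V_\beta(R) \approx \mathbb{E}[R] + \frac{\beta}{2} \mathrm{Var}[R] + O(\beta^2),

shows that \beta = 0 recovers the risk-neutral expected return, while \beta < 0 penalizes variance (risk-aversion) and \beta > 0 rewards it (risk-seeking).

A minimal numerical sketch of this objective, assuming NumPy (the function name entropic_risk is ours for illustration, not an identifier from the paper):

import numpy as np

def entropic_risk(returns, beta):
    # Entropic (exponential-utility) risk of sampled returns.
    # beta == 0 recovers the risk-neutral mean; beta < 0 is
    # risk-averse, beta > 0 risk-seeking.
    returns = np.asarray(returns, dtype=float)
    if beta == 0.0:
        return returns.mean()
    # log-mean-exp, stabilized against overflow
    z = beta * returns
    m = z.max()
    return (m + np.log(np.exp(z - m).mean())) / beta

For example, entropic_risk([0.0, 1.0], beta=-5.0) evaluates to roughly 0.14, well below the risk-neutral mean of 0.5, reflecting aversion to the spread of outcomes.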