Paper Title

The Strategic Perceptron

Paper Authors

Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita

Paper Abstract

The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in the presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the classifier may not observe an agent's true position but rather the position where the agent pretends to be. Unlike the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its guarantees, and we illustrate examples in which the predictor oscillates between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is a modified Perceptron-style algorithm which makes a bounded number of mistakes in the presence of strategic agents with both $\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model, knowledge of the manipulation costs (i.e., the extent to which an agent may manipulate) is assumed. In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates to achieve good mistake bounds even when manipulation costs are unknown.
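The abstract contrasts the classical mistake-driven update with a strategic setting in which the learner only observes a manipulated position. Below is a minimal Python sketch of those two ingredients: the standard Perceptron update and an agent's best-response $\ell_2$ manipulation. It is an illustration of the setup under assumed names and a fixed budget, not the authors' modified algorithm.

```python
import numpy as np

def perceptron_update(w, x, y):
    """Classical Perceptron step: on a mistake (the label y in {-1, +1}
    disagrees with sign(w.x)), shift w toward the misclassified point."""
    if y * np.dot(w, x) <= 0:
        w = w + y * x
    return w

def strategic_report(w, x, budget):
    """Illustrative best response of an agent who wants a positive label
    under an ell_2 manipulation cost: move the shortest distance needed
    to reach the boundary w.x = 0 if that distance is within the budget,
    otherwise stay put.  The learner sees only the returned point."""
    norm_w = np.linalg.norm(w)
    if norm_w == 0 or np.dot(w, x) >= 0:
        return x                          # already classified positive
    dist = -np.dot(w, x) / norm_w         # ell_2 distance to the boundary
    if dist <= budget:
        return x + dist * w / norm_w      # land exactly on the boundary
    return x                              # manipulation too costly
```

Running the classical update on such reported points is the failure mode the abstract describes: manipulated agents land on the boundary with zero margin and look identical to genuine positives, so the update can keep flipping the predictor between solutions instead of converging.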
