Paper Title
Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks
Paper Authors
Paper Abstract
To date, CAPTCHAs have served as the first line of defense preventing unauthorized access by (malicious) bots to web-based services, while at the same time maintaining a trouble-free experience for human visitors. However, recent work in the literature has provided evidence of sophisticated bots that make use of advancements in machine learning (ML) to easily bypass existing CAPTCHA-based defenses. In this work, we take the first step to address this problem. We introduce CAPTURE, a novel CAPTCHA scheme based on adversarial examples. While adversarial examples are typically used to lead an ML model astray, with CAPTURE we attempt to put such mechanisms to good use. Our empirical evaluations show that CAPTURE can produce CAPTCHAs that are easy for humans to solve while effectively thwarting ML-based bot solvers.
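The core mechanism the abstract refers to, perturbing a CAPTCHA image so that an ML solver misclassifies it while a human barely notices the change, can be illustrated with a minimal FGSM-style sketch. This is not the paper's actual CAPTURE pipeline; the toy linear classifier, the step size epsilon, and all variable names below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method): an FGSM-style perturbation
# that raises a toy ML solver's loss on a CAPTCHA glyph while keeping the
# pixel-level change small enough to be visually negligible.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "digit solver" over flattened 28x28 glyphs: scores = W x + b
num_classes, dim = 10, 28 * 28
W = rng.normal(scale=0.01, size=(num_classes, dim))
b = np.zeros(num_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(W @ x + b)

# Clean glyph image (pixel values in [0, 1]) and its true label
x = rng.random(dim)
y = 3

# Gradient of the cross-entropy loss w.r.t. the input: W^T (p - one_hot(y))
p = predict(x)
one_hot = np.eye(num_classes)[y]
grad_x = W.T @ (p - one_hot)

# FGSM-style step: move each pixel by at most epsilon in the direction that
# increases the solver's loss, then clip back to valid pixel range
epsilon = 0.05
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("clean prediction:", predict(x).argmax())
print("adversarial prediction:", predict(x_adv).argmax())
```

Because the perturbation is bounded per pixel by epsilon, the adversarial glyph remains legible to humans; the same idea, applied against stronger solver models, is what a scheme like CAPTURE relies on to separate human from bot accuracy.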