Paper Title

Improving Robustness of Jet Tagging Algorithms with Adversarial Training

Authors

Annika Stein, Xavier Coubez, Spandan Mondal, Andrzej Novak, Alexander Schmidt

Abstract

Deep learning is a standard tool in the field of high-energy physics, facilitating considerable sensitivity enhancements for numerous analysis strategies. In particular, in the identification of physics objects, such as jet flavor tagging, complex neural network architectures play a major role. However, these methods rely on accurate simulations. Mismodeling can lead to non-negligible differences in performance in data that need to be measured and calibrated against. We investigate the classifier response to input data with injected mismodelings and probe the vulnerability of flavor tagging algorithms via the application of adversarial attacks. Subsequently, we present an adversarial training strategy that mitigates the impact of such simulated attacks and improves the classifier robustness. We examine the relationship between performance and vulnerability and show that this method constitutes a promising approach to reduce the vulnerability to poor modeling.
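The attack-and-retrain idea from the abstract can be illustrated with a minimal sketch: a fast-gradient-sign (FGSM) perturbation of the classifier inputs, followed by a training loop that fits the model on the perturbed examples. This is a toy logistic classifier in NumPy under assumed settings (all function names, the choice of FGSM, and the hyperparameters are illustrative, not the authors' implementation, which targets deep jet-tagging networks):

```python
import numpy as np

def fgsm_attack(x, y, w, eps):
    """Perturb inputs x along the sign of the input-gradient of the loss
    (fast gradient sign method). x: (n, d) inputs, y: (n,) labels in
    {-1, +1}, w: (d,) weights of a linear logistic classifier."""
    margin = y * (x @ w)                      # per-sample margin y * w.x
    sigma = 1.0 / (1.0 + np.exp(-margin))     # probability of the true class
    # gradient of -log sigma(y * w.x) with respect to x:  -y * (1 - sigma) * w
    grad_x = -(y * (1.0 - sigma))[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    """Toy adversarial training loop: at every step, craft FGSM examples
    against the current weights and take a gradient step on those
    adversarial examples instead of the clean ones."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        x_adv = fgsm_attack(x, y, w, eps)
        margin = y * (x_adv @ w)
        sigma = 1.0 / (1.0 + np.exp(-margin))
        # gradient of the mean logistic loss with respect to w
        grad_w = -((y * (1.0 - sigma))[:, None] * x_adv).mean(axis=0)
        w -= lr * grad_w
    return w
```

The same two-step structure (inner attack, outer update) carries over to deep taggers; in that setting the input gradient comes from backpropagation through the network, and `eps` plays the role of the injected-mismodeling strength probed in the paper.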
