Paper Title


Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation

Paper Authors

Wenxiao Wang, Alexander Levine, Soheil Feizi

Abstract


Data poisoning attacks aim at manipulating model behaviors through distorting training data. Previously, an aggregation-based certified defense, Deep Partition Aggregation (DPA), was proposed to mitigate this threat. DPA predicts through an aggregation of base classifiers trained on disjoint subsets of data, thus restricting its sensitivity to dataset distortions. In this work, we propose an improved certified defense against general poisoning attacks, namely Finite Aggregation. In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets and then combines duplicates of them to build larger (but not disjoint) subsets for training base classifiers. This reduces the worst-case impacts of poison samples and thus improves certified robustness bounds. In addition, we offer an alternative view of our method, bridging the designs of deterministic and stochastic aggregation-based certified defenses. Empirically, our proposed Finite Aggregation consistently improves certificates on MNIST, CIFAR-10, and GTSRB, boosting certified fractions by up to 3.05%, 3.87% and 4.77%, respectively, while keeping the same clean accuracies as DPA's, effectively establishing a new state of the art in (pointwise) certified robustness against data poisoning.
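The subset construction described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the parameter names (`k` base classifiers, duplication factor `d`) and the hash-based bucket/spread assignments are assumptions made for the example. The key property is that the training set is first split into `k*d` small disjoint buckets, and each bucket is then duplicated into `d` of the `k` (overlapping) training subsets.

```python
# Illustrative sketch of Finite Aggregation's subset construction.
# k, d, and the hash-based assignment are assumptions for illustration,
# not the paper's exact implementation.
import hashlib

def bucket_of(sample_id: int, num_buckets: int) -> int:
    """Deterministically hash a sample into one of k*d small disjoint buckets."""
    h = hashlib.sha256(str(sample_id).encode()).hexdigest()
    return int(h, 16) % num_buckets

def build_training_subsets(sample_ids, k: int, d: int):
    """Build k overlapping training subsets: each of the k base classifiers
    trains on d of the k*d buckets, so every bucket appears in d subsets."""
    num_buckets = k * d
    buckets = [[] for _ in range(num_buckets)]
    for s in sample_ids:
        buckets[bucket_of(s, num_buckets)].append(s)
    # Spread each bucket into d distinct classifiers (assumes d <= k)
    # via a second deterministic hash.
    subsets = [[] for _ in range(k)]
    for b in range(num_buckets):
        h = int(hashlib.sha256(f"spread-{b}".encode()).hexdigest(), 16)
        for j in range(d):
            subsets[(h + j) % k].extend(buckets[b])
    return subsets
```

Note that with `d = 1` this reduces to DPA's disjoint partition into `k` subsets; larger `d` spreads each bucket across `d` classifiers, which is what limits the worst-case influence of any single poisoned sample on the aggregated (majority-vote) prediction.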
