Paper Title
Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion
Paper Authors
Paper Abstract
Deep 3D point cloud models are sensitive to adversarial attacks, which poses a threat to safety-critical applications such as autonomous driving. Robust training and defend-by-denoising are typical strategies for defending against adversarial perturbations. However, they either incur massive computational overhead or rely heavily on specified priors, limiting generalized robustness against diverse attacks. To remedy this, this paper introduces a novel distortion-aware defense framework that rebuilds the pristine data distribution with a tailored intensity estimator and a diffusion model. To perform distortion-aware forward diffusion, we design a distortion estimation algorithm that sums the distance of each point to the best-fitting plane of its local neighboring points, motivated by the observed local spatial properties of adversarial point clouds. Through iterative diffusion and reverse denoising, perturbed point clouds under various distortions can be restored to a clean distribution. This approach enables effective defense against adaptive attacks with varying noise budgets, enhancing the robustness of existing 3D deep recognition models.
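The distortion estimator described above (summing each point's distance to the plane best fitting its local neighborhood) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `estimate_distortion`, the neighborhood size `k`, the brute-force neighbor search, and the PCA-based plane fit are all assumptions chosen for clarity.

```python
import numpy as np

def estimate_distortion(points: np.ndarray, k: int = 10) -> float:
    """Illustrative sketch: sum over all points of the distance from each
    point to the best-fitting plane of its k nearest neighbors.

    points: (n, 3) array of xyz coordinates.
    """
    n = points.shape[0]
    # Brute-force pairwise squared distances for neighbor search
    # (O(n^2); a k-d tree would be used for large clouds).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    total = 0.0
    for i in range(n):
        # k nearest neighbors, excluding the point itself.
        idx = np.argsort(d2[i])[1:k + 1]
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        # Plane normal = eigenvector of the neighborhood covariance
        # with the smallest eigenvalue (eigh sorts ascending).
        cov = np.cov((nbrs - centroid).T)
        _, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        # Point-to-plane distance of the query point.
        total += abs((points[i] - centroid) @ normal)
    return total
```

For a point cloud sampled from a flat surface this estimate is near zero, while adversarially perturbed points that leave their local surface increase it, which is what makes it usable as a per-cloud diffusion-intensity signal.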