Paper Title
DDDM: A Brain-Inspired Framework for Robust Classification
Paper Authors
Paper Abstract
Despite their outstanding performance in a broad spectrum of real-world tasks, deep artificial neural networks are sensitive to input noise, particularly adversarial perturbations. By contrast, human and animal brains are far less vulnerable. Unlike the one-shot inference performed by most deep neural networks, the brain often makes decisions through an evidence-accumulation mechanism that may trade time for accuracy when facing noisy inputs. This mechanism is well described by the Drift-Diffusion Model (DDM), in which decision-making is modeled as a process that accumulates noisy evidence toward a threshold. Drawing inspiration from the DDM, we propose the Dropout-based Drift-Diffusion Model (DDDM), which combines test-phase dropout with the DDM to improve the robustness of arbitrary neural networks. Dropout injects temporally uncorrelated noise into the network that counters perturbations, while the evidence-accumulation mechanism guarantees a reasonable decision accuracy. Neural networks enhanced with the DDDM significantly outperform their native counterparts in image, speech, and text classification tasks, demonstrating that the DDDM is a task-agnostic defense against adversarial attacks.
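The abstract's core idea — repeat stochastic (dropout-perturbed) forward passes and accumulate class evidence until it crosses a decision threshold — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function `dddm_decide`, the threshold value, and the toy `noisy_net` stand-in for a dropout network are all assumptions introduced for the example.

```python
import numpy as np

def dddm_decide(stochastic_logits, n_classes, threshold=5.0, max_steps=100):
    """Illustrative DDDM-style inference: accumulate relative evidence from
    repeated stochastic forward passes (e.g. a network with test-phase
    dropout enabled) until one class's evidence crosses the threshold,
    in the spirit of the Drift-Diffusion Model."""
    evidence = np.zeros(n_classes)
    step = 0
    for step in range(1, max_steps + 1):
        logits = stochastic_logits()        # one dropout-perturbed forward pass
        evidence += logits - logits.mean()  # evidence relative to the other classes
        if evidence.max() >= threshold:     # a class reached the decision bound
            break
    return int(evidence.argmax()), step

# Toy stand-in for a dropout network: class 2 has a slightly higher mean
# logit, but any single pass is dominated by noise, so one-shot inference
# is unreliable while accumulation recovers the signal.
rng = np.random.default_rng(0)
noisy_net = lambda: np.array([0.0, 0.0, 0.3]) + rng.normal(0.0, 1.0, 3)

label, steps = dddm_decide(noisy_net, n_classes=3)
print(label, steps)
```

Raising the threshold trades more passes (time) for a more reliable decision, which mirrors the speed-accuracy trade-off the abstract attributes to the brain's evidence accumulation.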