Paper Title

Training-Free Robust Multimodal Learning via Sample-Wise Jacobian Regularization

Authors

Zhengqi Gao, Sucheng Ren, Zihui Xue, Siting Li, Hang Zhao

Abstract

Multimodal fusion has emerged as an appealing technique for improving model performance on many tasks. Nevertheless, the robustness of such fusion methods is rarely addressed in the existing literature. In this paper, we propose a training-free robust late-fusion method that exploits a conditional independence assumption and Jacobian regularization. Our key idea is to minimize the Frobenius norm of a Jacobian matrix, where the resulting optimization problem is relaxed to a tractable Sylvester equation. Furthermore, we provide a theoretical error bound for our method and some insights into the function of the extra modality. Several numerical experiments on AV-MNIST, RAVDESS, and VGGsound demonstrate the efficacy of our method under both adversarial attacks and random corruptions.
