Paper Title

Disentangled Information Bottleneck

Authors

Ziqi Pan, Li Niu, Jianfu Zhang, Liqing Zhang

Abstract

The information bottleneck (IB) method is a technique for extracting information that is relevant for predicting the target random variable from the source random variable, which is typically implemented by optimizing the IB Lagrangian that balances the compression and prediction terms. However, the IB Lagrangian is hard to optimize, and multiple trials for tuning values of Lagrangian multiplier are required. Moreover, we show that the prediction performance strictly decreases as the compression gets stronger during optimizing the IB Lagrangian. In this paper, we implement the IB method from the perspective of supervised disentangling. Specifically, we introduce Disentangled Information Bottleneck (DisenIB) that is consistent on compressing source maximally without target prediction performance loss (maximum compression). Theoretical and experimental results demonstrate that our method is consistent on maximum compression, and performs well in terms of generalization, robustness to adversarial attack, out-of-distribution detection, and supervised disentangling.
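The IB Lagrangian the abstract refers to trades off a compression term I(Z; X) against a prediction term I(Z; Y), weighted by a multiplier β. As a minimal illustration (not the paper's DisenIB method), the sketch below computes a variational-IB-style surrogate of this objective, where a KL term to a standard-normal prior upper-bounds the compression term and a cross-entropy term stands in for the prediction term; the function name `vib_loss` and all arguments are illustrative assumptions.

```python
import numpy as np

def vib_loss(mu, logvar, log_probs, labels, beta):
    """Variational surrogate for the IB Lagrangian (illustrative sketch).

    mu, logvar:  parameters of a diagonal Gaussian encoder q(z|x), shape (B, D)
    log_probs:   log class probabilities from the decoder, shape (B, C)
    labels:      integer class labels, shape (B,)
    beta:        Lagrangian multiplier balancing compression vs. prediction
    """
    # Compression: KL(q(z|x) || N(0, I)) upper-bounds I(Z; X) per sample.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)
    # Prediction: negative log-likelihood of the correct label,
    # a variational bound related to -I(Z; Y) up to a constant.
    nll = -log_probs[np.arange(len(labels)), labels]
    # IB Lagrangian: prediction loss plus beta times the compression penalty.
    return np.mean(nll + beta * kl)
```

Larger β compresses more aggressively; the abstract's point is that under this trade-off, stronger compression strictly hurts prediction, whereas DisenIB aims at maximum compression with no prediction loss.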
