Paper Title
Privacy Against Inference Attacks in Vertical Federated Learning
Paper Authors
Paper Abstract
Vertical federated learning (VFL) is considered, where an active party, having access to the true class labels, wishes to build a classification model by utilizing additional features from a passive party, which has no access to the labels, in order to improve model accuracy. In the prediction phase, with logistic regression as the classification model, several inference attack techniques are proposed that the adversary, i.e., the active party, can employ to reconstruct the passive party's features, which are regarded as sensitive information. These attacks, mainly based on a classical notion of the center of a set, namely the Chebyshev center, are shown to be superior to those proposed in the literature. Moreover, several theoretical performance guarantees are provided for the aforementioned attacks. Subsequently, we consider the minimum amount of information that the adversary needs to fully reconstruct the passive party's features. In particular, it is shown that when the passive party holds a single feature and the adversary knows only the signs of the parameters involved, the adversary can perfectly reconstruct that feature once the number of predictions is sufficiently large. Next, as a defense mechanism, a privacy-preserving scheme is proposed that degrades the adversary's reconstruction attacks while preserving the full benefits that VFL brings to the active party. Finally, experimental results demonstrate the effectiveness of the proposed attacks and of the privacy-preserving scheme.
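To make the Chebyshev-center idea concrete, the following is a minimal Python sketch, not the paper's exact estimator. It assumes the passive party's features are normalized to [0, 1], that the adversary knows the passive party's weight vector w, and that it observes the passive party's partial logit z = w·x for a given prediction. Under those assumptions, the feasible set {x in [0, 1]^d : w·x = z} is a polytope, and its Chebyshev center is computed here in the standard linear-programming form (the center of the largest inscribed ball), which may differ from the exact notion the paper uses.

# Hypothetical sketch of a Chebyshev-center reconstruction attack for a
# single prediction; the threat-model details below are assumptions, not
# necessarily the paper's exact setting.
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

def chebyshev_center_attack(w, z):
    """Estimate the passive party's features as the Chebyshev center
    (largest-inscribed-ball LP formulation) of the feasible polytope
    {x in [0, 1]^d : w @ x = z}."""
    d = w.size
    x0 = w * z / (w @ w)              # particular solution of w @ x = z
    N = null_space(w.reshape(1, -1))  # orthonormal basis of {v : w @ v = 0}
    if N.size == 0:                   # single feature: the logit pins x exactly
        return np.clip(x0, 0.0, 1.0)
    # Box constraints 0 <= x0 + N @ t <= 1 become A @ t <= b in t-coordinates.
    A = np.vstack([N, -N])
    b = np.concatenate([1.0 - x0, x0])
    # Chebyshev-center LP: maximize r subject to A[i] @ t + r * ||A[i]|| <= b[i].
    norms = np.linalg.norm(A, axis=1)
    c = np.zeros(d)                   # decision variables: (t_1..t_{d-1}, r)
    c[-1] = -1.0                      # linprog minimizes, so minimize -r
    res = linprog(c, A_ub=np.hstack([A, norms[:, None]]), b_ub=b,
                  bounds=[(None, None)] * (d - 1) + [(0, None)],
                  method="highs")
    return x0 + N @ res.x[:-1]        # map the center back to feature space

# Toy usage: estimate four hidden passive-party features from one partial logit.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x_true = rng.uniform(size=4)
print(chebyshev_center_attack(w, z=w @ x_true))

In this sketch the weight vector is assumed fully known; when the adversary has weaker knowledge, such as only the signs of the parameters, aggregating information across many predictions becomes necessary, which is consistent with the abstract's asymptotic single-feature result.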