Paper Title
Evaluation of Inference Attack Models for Deep Learning on Medical Data
Paper Authors
Paper Abstract
Deep learning has attracted broad interest in the healthcare and medical communities. However, there has been little research into the privacy issues created by deep networks trained for medical applications. Recently developed inference attack algorithms indicate that images and text records can be reconstructed by malicious parties with the ability to query deep networks. This raises the concern that medical images and electronic health records containing sensitive patient information are vulnerable to such attacks. This paper aims to draw the attention of researchers in the medical deep learning community to this important problem. We evaluate two prominent inference attack models, namely the attribute inference attack and the model inversion attack. We show that they can reconstruct real-world medical images and clinical reports with high fidelity. We then investigate how to protect patients' privacy using defense mechanisms such as label perturbation and model perturbation. We compare attack results on the original medical deep learning models against those hardened with these defenses. The experimental evaluations show that our proposed defense approaches can effectively reduce the potential privacy leakage of medical deep learning models under inference attacks.
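The abstract names the attack and defense techniques without detailing them. As a minimal, hypothetical sketch (not the authors' implementation), the following PyTorch snippet illustrates a generic model inversion attack of the kind the paper evaluates: an attacker with gradient access to a trained classifier optimizes an input image to maximize the model's confidence for a chosen class, gradually recovering a representative, potentially privacy-sensitive reconstruction. The function name, starting point, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def model_inversion(model, target_class, input_shape, steps=500, lr=0.1):
    """Hypothetical gradient-ascent inversion: reconstruct a representative
    input for `target_class` using only (white-box) query access to `model`."""
    model.eval()
    # Start from a uniform gray image and optimize the input itself.
    x = torch.full((1, *input_shape), 0.5, requires_grad=True)
    optimizer = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Minimize the negative log-probability of the target class,
        # pushing the model's confidence for that class toward 1.
        loss = -F.log_softmax(model(x), dim=1)[0, target_class]
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in valid pixel range
    return x.detach()
```

A label perturbation defense, one of the mechanisms the abstract mentions, can be sketched in the same spirit: instead of returning exact confidences, the deployed model adds noise to its output probabilities, degrading the signal an inversion attacker relies on. Again, this is an assumed illustration under a simple additive-noise scheme, not the paper's exact mechanism.

```python
import torch

def perturbed_labels(model, x, epsilon=0.1):
    """Hypothetical defense: serve noisy, re-normalized class probabilities
    so repeated queries leak less information for reconstruction."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
        noisy = probs + epsilon * torch.rand_like(probs)
        return noisy / noisy.sum(dim=1, keepdim=True)  # re-normalize
```

The tension these sketches expose is the one the paper studies: larger noise scales blunt the inversion attack more effectively but also degrade the utility of the served predictions.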