Paper Title
Assessing Privacy Leakage in Synthetic 3-D PET Imaging using Transversal GAN
Paper Authors
Paper Abstract
Training computer-vision algorithms on medical images for disease diagnosis or image segmentation is difficult in large part due to privacy concerns. For this reason, generative image models are highly sought after to facilitate data sharing. However, 3-D generative models are understudied, and investigation of their privacy leakage is needed. We introduce Transversal GAN (TrGAN), a 3-D generative model conditioned on tumour masks, using head & neck PET images as a case study. We define quantitative measures of image fidelity, utility, and privacy for our model. These metrics are evaluated over the course of training to identify ideal fidelity-utility-privacy trade-offs and to establish the relationships between these parameters. We show that the TrGAN discriminator is vulnerable to attack: an attacker can identify which samples were used in training with almost perfect accuracy (AUC = 0.99). We also show that an attacker with access to only the generator cannot reliably determine whether a sample was used in training (AUC = 0.51). This suggests that TrGAN generators, but not discriminators, can be used to share synthetic 3-D PET data with minimal privacy risk while maintaining good utility and fidelity.
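The membership-inference evaluation summarized above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual code: it assumes the attacker assigns each sample a membership score (e.g. a discriminator output) and that attack success is measured as the AUC of those scores, where members of the training set are expected to score higher than non-members. The function name `auc` and the toy scores are purely illustrative.

```python
# Hypothetical sketch of membership-inference scoring: the attacker
# scores each sample (e.g. via discriminator output) and the AUC
# measures how well members separate from non-members.

def auc(member_scores, nonmember_scores):
    """Probability that a random member outscores a random non-member
    (ties count as 0.5). AUC = 1.0 means perfect leakage; 0.5 is chance."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Toy example: a leaky discriminator cleanly separates members (AUC near 1),
# while a generator-only attack gives chance-level scores (AUC near 0.5).
leaky = auc([0.9, 0.8, 0.95], [0.2, 0.3, 0.1])   # -> 1.0
safe = auc([0.5, 0.6, 0.4], [0.55, 0.45, 0.5])   # -> 0.5
```

In this framing, the paper's reported AUC = 0.99 for discriminator access corresponds to the "leaky" case, and AUC = 0.51 for generator-only access corresponds to the chance-level case.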