Paper Title


Federated Semi-Supervised Learning with Prototypical Networks

Authors

Woojung Kim, Keondo Park, Kihyuk Sohn, Raphael Shu, Hyung-Sin Kim

Abstract


With the increasing computing power of edge devices, Federated Learning (FL) has emerged to enable model training without privacy concerns. Most existing studies assume that data on the client side are fully labeled. In practice, however, the amount of labeled data is often limited. Recently, federated semi-supervised learning (FSSL) has been explored as a way to effectively utilize unlabeled data during training. In this work, we propose ProtoFSSL, a novel FSSL approach based on prototypical networks. In ProtoFSSL, clients share knowledge with each other via lightweight prototypes, which prevents the local models from diverging. To compute a loss on unlabeled data, each client creates accurate pseudo-labels based on the shared prototypes. Jointly with the labeled data, these pseudo-labels provide training signals for the local prototypes. Compared to an FSSL approach based on weight sharing, prototype-based inter-client knowledge sharing significantly reduces both communication and computation costs, enabling more frequent knowledge sharing among more clients for better accuracy. On multiple datasets, ProtoFSSL achieves higher accuracy than recent FSSL methods with and without knowledge sharing, such as FixMatch, FedRGD, and FedMatch. On the SVHN dataset, ProtoFSSL performs comparably to fully supervised FL methods.
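The abstract's core mechanism, pseudo-labeling unlabeled data against class prototypes, can be illustrated with a minimal sketch. This is not the paper's implementation; it only assumes the standard prototypical-network convention that a class prototype is the mean embedding of that class's labeled examples, and that an unlabeled point is pseudo-labeled by its nearest prototype. The function names and the use of Euclidean distance are illustrative assumptions.

```python
import numpy as np

def compute_prototypes(embeddings, labels, num_classes):
    """Class prototype = mean embedding of the labeled examples of that class."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pseudo_label(unlabeled_embeddings, prototypes):
    """Assign each unlabeled embedding the class of its nearest prototype."""
    # Pairwise Euclidean distances, shape (num_unlabeled, num_classes).
    dists = np.linalg.norm(
        unlabeled_embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-D embeddings for two classes.
labeled_emb = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
labeled_y = np.array([0, 0, 1, 1])
protos = compute_prototypes(labeled_emb, labeled_y, num_classes=2)

unlabeled_emb = np.array([[0.2, 0.5], [5.5, 5.0]])
pseudo = pseudo_label(unlabeled_emb, protos)  # -> array([0, 1])
```

In the federated setting described above, prototypes (one small vector per class) would be what clients exchange instead of full model weights, which is the source of the communication savings the abstract claims.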
