Paper Title
Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup
Paper Authors
Paper Abstract
This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
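The two-way Mixup described in the abstract is the letter's privacy mechanism: each device uploads only linearly mixed local samples, and the server inversely mixes uploads from different devices to synthesize seed samples for the model output-to-parameter (distillation) conversion. Below is a minimal NumPy sketch of one such pairwise exchange; the shared mixing ratio lam, the single-pair 2x2 inversion, and all variable names are illustrative assumptions rather than the letter's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 0.2  # assumed shared Mixup ratio; lam != 0.5 keeps the mixing matrix invertible

    # Device side (uplink): each device mixes two of its own raw samples,
    # so only linearly mixed samples ever leave the device.
    def mixup(x1, x2, lam):
        """Return the Mixup sample lam*x1 + (1 - lam)*x2."""
        return lam * x1 + (1.0 - lam) * x2

    # Hypothetical raw samples held by two devices (e.g., flattened images).
    a1, a2 = rng.normal(size=4), rng.normal(size=4)   # device A
    b1, b2 = rng.normal(size=4), rng.normal(size=4)   # device B

    m_a = mixup(a1, a2, lam)   # uploaded by device A
    m_b = mixup(b1, b2, lam)   # uploaded by device B

    # Server side: inverse Mixup across the two devices. Treating the pair
    # (m_a, m_b) as a 2x2 linear mixture with matrix [[lam, 1-lam], [1-lam, lam]],
    # the server solves for cross-device seed samples to be used in the
    # output-to-parameter conversion (distilling the global model).
    M = np.array([[lam, 1.0 - lam],
                  [1.0 - lam, lam]])
    mixed = np.stack([m_a, m_b])          # shape (2, d)
    seeds = np.linalg.solve(M, mixed)     # inversely mixed samples, shape (2, d)

    # Sanity check: re-mixing the recovered seeds reproduces the uploads.
    assert np.allclose(M @ seeds, mixed)

Because each recovered seed is a linear combination of uploads from both devices, it matches no single device's raw sample, which is how inverse mixing across devices can preserve privacy while still supplying realistic samples for the server-side distillation step.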