Paper Title
An On-Device Federated Learning Approach for Cooperative Model Update between Edge Devices
Paper Authors
Abstract
Most edge AI focuses on prediction tasks on resource-limited edge devices, while training is done on server machines. However, retraining or customizing a model is required at edge devices as the model becomes outdated due to environmental changes over time. To follow such concept drift, a neural-network-based on-device learning approach has recently been proposed, so that edge devices train on incoming data at runtime to update their models. In this case, since training is done on distributed edge devices, the issue is that only a limited amount of training data can be used at each edge device. To address this issue, one approach is cooperative or federated learning, where edge devices exchange their trained results and update their models using those collected from the other devices. In this paper, as an on-device learning algorithm, we focus on OS-ELM (Online Sequential Extreme Learning Machine) to sequentially train a model based on recent samples, and combine it with an autoencoder for anomaly detection. We extend it for on-device federated learning so that edge devices can exchange their trained results and update their models using those collected from the other edge devices. This cooperative model update is one-shot, while it can be applied repeatedly to synchronize their models. Our approach is evaluated on anomaly detection tasks generated from a car driving dataset, a human activity dataset, and the MNIST dataset. The results demonstrate that the proposed on-device federated learning can produce a merged model, by integrating trained results from multiple edge devices, as accurately as traditional backpropagation-based neural networks and a traditional federated learning approach, with a lower computation or communication cost.
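The abstract combines three ideas: an OS-ELM model trained sequentially on recent samples, an autoencoder whose reconstruction error serves as an anomaly score, and a one-shot merge of trained results from multiple devices. A minimal NumPy sketch of these ideas follows; the layer sizes, ridge term, and the exact merge formula are our assumptions for illustration, not taken verbatim from the paper.

```python
import numpy as np

# Illustrative OS-ELM autoencoder for anomaly detection, plus a one-shot
# merge of two trained models. All sizes and constants are assumptions.
rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16                       # input / hidden sizes (assumed)
W = rng.standard_normal((n_in, n_hidden))    # fixed random input weights
b = rng.standard_normal(n_hidden)            # fixed random bias

def hidden(x):
    """Sigmoid activation of the fixed random hidden layer."""
    return 1.0 / (1.0 + np.exp(-(np.atleast_2d(x) @ W + b)))

# Initial batch: for an autoencoder, the training target is the input itself.
X0 = rng.standard_normal((32, n_in))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))  # small ridge for stability
beta = P @ H0.T @ X0                                     # output weights

def sequential_update(x):
    """One-sample OS-ELM update (Sherman-Morrison rank-1 form)."""
    global P, beta
    h = hidden(x)
    P = P - (P @ h.T @ h @ P) / (1.0 + float(h @ P @ h.T))
    beta = beta + P @ h.T @ (np.atleast_2d(x) - h @ beta)

def anomaly_score(x):
    """Reconstruction error of the autoencoder: a large error flags an anomaly."""
    return float(np.sum((np.atleast_2d(x) - hidden(x) @ beta) ** 2))

def merge(P_a, beta_a, P_b, beta_b):
    """One-shot least-squares merge of two models sharing the same W and b.
    Adds the normal-equation matrices (inverses of P); a sketch of the idea,
    not necessarily the paper's exact formula."""
    A, B = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P_m = np.linalg.inv(A + B)
    return P_m, P_m @ (A @ beta_a + B @ beta_b)

for x in rng.standard_normal((100, n_in)):   # runtime sequential training
    sequential_update(x)
```

A quick sanity check of the merge: combining a model with itself returns the same output weights `beta`, since the normal equations simply double on both sides, which is consistent with the least-squares view of the combination.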